00:00:00.000 Started by upstream project "autotest-per-patch" build number 132133
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.088 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.088 The recommended git tool is: git
00:00:00.088 using credential 00000000-0000-0000-0000-000000000002
00:00:00.090 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.133 Fetching changes from the remote Git repository
00:00:00.135 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.179 Using shallow fetch with depth 1
00:00:00.179 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.179 > git --version # timeout=10
00:00:00.221 > git --version # 'git version 2.39.2'
00:00:00.221 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.249 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.249 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.657 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.670 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.683 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD)
00:00:06.683 > git config core.sparsecheckout # timeout=10
00:00:06.694 > git read-tree -mu HEAD # timeout=10
00:00:06.712 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5
00:00:06.732 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd"
00:00:06.733 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10
00:00:06.832 [Pipeline] Start of Pipeline
00:00:06.843 [Pipeline] library
00:00:06.844 Loading library shm_lib@master
00:00:06.844 Library shm_lib@master is cached. Copying from home.
00:00:06.856 [Pipeline] node
00:00:06.866 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest
00:00:06.867 [Pipeline] {
00:00:06.875 [Pipeline] catchError
00:00:06.877 [Pipeline] {
00:00:06.886 [Pipeline] wrap
00:00:06.893 [Pipeline] {
00:00:06.902 [Pipeline] stage
00:00:06.904 [Pipeline] { (Prologue)
00:00:06.920 [Pipeline] echo
00:00:06.921 Node: VM-host-SM38
00:00:06.927 [Pipeline] cleanWs
00:00:06.936 [WS-CLEANUP] Deleting project workspace...
00:00:06.936 [WS-CLEANUP] Deferred wipeout is used...
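Everything above pins the jbp helper scripts to a single commit before any test logic runs. For debugging outside Jenkins, the same pinned state can be recreated with plain git; a minimal sketch, assuming anonymous read access to the Gerrit mirror shown in the log:

    # Recreate the pinned jbp checkout from the log above (sketch; the mirror may require credentials).
    git init jbp && cd jbp
    git fetch --depth=1 https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
    git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf   # "jenkins/jjb-config: Ignore OS version mismatch under freebsd"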
00:00:06.943 [WS-CLEANUP] done
00:00:07.174 [Pipeline] setCustomBuildProperty
00:00:07.266 [Pipeline] httpRequest
00:00:07.840 [Pipeline] echo
00:00:07.841 Sorcerer 10.211.164.101 is alive
00:00:07.848 [Pipeline] retry
00:00:07.849 [Pipeline] {
00:00:07.859 [Pipeline] httpRequest
00:00:07.864 HttpMethod: GET
00:00:07.865 URL: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:07.865 Sending request to url: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:07.876 Response Code: HTTP/1.1 200 OK
00:00:07.877 Success: Status code 200 is in the accepted range: 200,404
00:00:07.877 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:11.794 [Pipeline] }
00:00:11.812 [Pipeline] // retry
00:00:11.819 [Pipeline] sh
00:00:12.117 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:12.149 [Pipeline] httpRequest
00:00:12.543 [Pipeline] echo
00:00:12.545 Sorcerer 10.211.164.101 is alive
00:00:12.555 [Pipeline] retry
00:00:12.557 [Pipeline] {
00:00:12.571 [Pipeline] httpRequest
00:00:12.576 HttpMethod: GET
00:00:12.577 URL: http://10.211.164.101/packages/spdk_899af6c35556773d93494c6a94d023acd5b69645.tar.gz
00:00:12.578 Sending request to url: http://10.211.164.101/packages/spdk_899af6c35556773d93494c6a94d023acd5b69645.tar.gz
00:00:12.597 Response Code: HTTP/1.1 200 OK
00:00:12.598 Success: Status code 200 is in the accepted range: 200,404
00:00:12.598 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_899af6c35556773d93494c6a94d023acd5b69645.tar.gz
00:01:03.748 [Pipeline] }
00:01:03.765 [Pipeline] // retry
00:01:03.773 [Pipeline] sh
00:01:04.058 + tar --no-same-owner -xf spdk_899af6c35556773d93494c6a94d023acd5b69645.tar.gz
00:01:07.379 [Pipeline] sh
00:01:07.664 + git -C spdk log --oneline -n5
00:01:07.664 899af6c35 lib/nvme: destruct controllers that failed init asynchronously
00:01:07.664 d1c46ed8e lib/rdma_provider: Add API to check if accel seq supported
00:01:07.664 a59d7e018 lib/mlx5: Add API to check if UMR registration supported
00:01:07.664 f6925f5e4 accel/mlx5: Merge crypto+copy to reg UMR
00:01:07.664 008a6371b accel/mlx5: Initial implementation of mlx5 platform driver
00:01:07.684 [Pipeline] writeFile
00:01:07.699 [Pipeline] sh
00:01:07.987 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:08.000 [Pipeline] sh
00:01:08.283 + cat autorun-spdk.conf
00:01:08.283 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:08.283 SPDK_TEST_NVME=1
00:01:08.283 SPDK_TEST_FTL=1
00:01:08.283 SPDK_TEST_ISAL=1
00:01:08.283 SPDK_RUN_ASAN=1
00:01:08.283 SPDK_RUN_UBSAN=1
00:01:08.283 SPDK_TEST_XNVME=1
00:01:08.283 SPDK_TEST_NVME_FDP=1
00:01:08.283 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:08.291 RUN_NIGHTLY=0
00:01:08.293 [Pipeline] }
00:01:08.306 [Pipeline] // stage
00:01:08.322 [Pipeline] stage
00:01:08.324 [Pipeline] { (Run VM)
00:01:08.336 [Pipeline] sh
00:01:08.620 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:08.620 + echo 'Start stage prepare_nvme.sh'
00:01:08.620 Start stage prepare_nvme.sh
00:01:08.620 + [[ -n 7 ]]
00:01:08.620 + disk_prefix=ex7
00:01:08.620 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:01:08.620 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:01:08.620 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:01:08.620 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:08.620 ++ SPDK_TEST_NVME=1
00:01:08.620 ++ SPDK_TEST_FTL=1
00:01:08.620 ++ SPDK_TEST_ISAL=1
00:01:08.620 ++ SPDK_RUN_ASAN=1
00:01:08.620 ++ SPDK_RUN_UBSAN=1
00:01:08.620 ++ SPDK_TEST_XNVME=1
00:01:08.620 ++ SPDK_TEST_NVME_FDP=1
00:01:08.620 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:08.620 ++ RUN_NIGHTLY=0
00:01:08.620 + cd /var/jenkins/workspace/nvme-vg-autotest
00:01:08.620 + nvme_files=()
00:01:08.620 + declare -A nvme_files
00:01:08.620 + backend_dir=/var/lib/libvirt/images/backends
00:01:08.620 + nvme_files['nvme.img']=5G
00:01:08.620 + nvme_files['nvme-cmb.img']=5G
00:01:08.620 + nvme_files['nvme-multi0.img']=4G
00:01:08.620 + nvme_files['nvme-multi1.img']=4G
00:01:08.620 + nvme_files['nvme-multi2.img']=4G
00:01:08.620 + nvme_files['nvme-openstack.img']=8G
00:01:08.620 + nvme_files['nvme-zns.img']=5G
00:01:08.620 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:08.620 + (( SPDK_TEST_FTL == 1 ))
00:01:08.620 + nvme_files["nvme-ftl.img"]=6G
00:01:08.620 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:08.620 + nvme_files["nvme-fdp.img"]=1G
00:01:08.620 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:08.620 + for nvme in "${!nvme_files[@]}"
00:01:08.620 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G
00:01:08.881 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:08.881 + for nvme in "${!nvme_files[@]}"
00:01:08.882 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-ftl.img -s 6G
00:01:09.454 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:01:09.454 + for nvme in "${!nvme_files[@]}"
00:01:09.454 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G
00:01:09.454 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:09.454 + for nvme in "${!nvme_files[@]}"
00:01:09.454 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G
00:01:09.716 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:09.716 + for nvme in "${!nvme_files[@]}"
00:01:09.716 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G
00:01:09.716 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:09.716 + for nvme in "${!nvme_files[@]}"
00:01:09.716 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G
00:01:09.977 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:09.977 + for nvme in "${!nvme_files[@]}"
00:01:09.977 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G
00:01:10.551 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:10.551 + for nvme in "${!nvme_files[@]}"
00:01:10.551 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-fdp.img -s 1G
00:01:10.551 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:01:10.551 + for nvme in "${!nvme_files[@]}"
00:01:10.551 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G
00:01:11.495 Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:11.495 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu
00:01:11.495 + echo 'End stage prepare_nvme.sh'
00:01:11.495 End stage prepare_nvme.sh
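The prepare_nvme.sh trace above reduces to a small pattern: a bash associative array maps image names to sizes, the FTL and FDP images are appended only when the matching SPDK_TEST_* flags from autorun-spdk.conf are set, and a loop creates one raw backing file per entry. A condensed sketch of that pattern (not the full script):

    # Condensed from the trace above; create_nvme_img.sh and the paths are as logged.
    source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
    declare -A nvme_files=( ['nvme.img']=5G ['nvme-multi0.img']=4G ['nvme-multi1.img']=4G ['nvme-multi2.img']=4G )
    (( SPDK_TEST_FTL == 1 )) && nvme_files['nvme-ftl.img']=6G        # extra 6G image for FTL tests
    (( SPDK_TEST_NVME_FDP == 1 )) && nvme_files['nvme-fdp.img']=1G   # extra 1G image for FDP tests
    for nvme in "${!nvme_files[@]}"; do
        sudo -E spdk/scripts/vagrant/create_nvme_img.sh \
            -n "/var/lib/libvirt/images/backends/ex7-${nvme}" -s "${nvme_files[$nvme]}"
    done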
00:01:11.509 [Pipeline] sh
00:01:11.795 + DISTRO=fedora39
00:01:11.795 + CPUS=10
00:01:11.795 + RAM=12288
00:01:11.795 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:11.796 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex7-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex7-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:01:11.796
00:01:11.796 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:01:11.796 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:01:11.796 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:01:11.796 HELP=0
00:01:11.796 DRY_RUN=0
00:01:11.796 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme-ftl.img,/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,/var/lib/libvirt/images/backends/ex7-nvme-fdp.img,
00:01:11.796 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:01:11.796 NVME_AUTO_CREATE=0
00:01:11.796 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,,
00:01:11.796 NVME_CMB=,,,,
00:01:11.796 NVME_PMR=,,,,
00:01:11.796 NVME_ZNS=,,,,
00:01:11.796 NVME_MS=true,,,,
00:01:11.796 NVME_FDP=,,,on,
00:01:11.796 SPDK_VAGRANT_DISTRO=fedora39
00:01:11.796 SPDK_VAGRANT_VMCPU=10
00:01:11.796 SPDK_VAGRANT_VMRAM=12288
00:01:11.796 SPDK_VAGRANT_PROVIDER=libvirt
00:01:11.796 SPDK_VAGRANT_HTTP_PROXY=
00:01:11.796 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:11.796 SPDK_OPENSTACK_NETWORK=0
00:01:11.796 VAGRANT_PACKAGE_BOX=0
00:01:11.796 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:11.796 FORCE_DISTRO=true
00:01:11.796 VAGRANT_BOX_VERSION=
00:01:11.796 EXTRA_VAGRANTFILES=
00:01:11.796 NIC_MODEL=e1000
00:01:11.796
00:01:11.796 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:01:11.796 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
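Each -b argument above packs one backing disk's options into positional comma-separated fields. Lining them up against the NVME_* variables echoed by the script suggests the following layout; this is a reading of this run's output, not documentation of vagrant_create_vm.sh:

    # Inferred field order: -b <image>,<type>,<extra ns images>,<cmb>,<pmr>,<zns>,<ms>,<fdp>
    #   ex7-nvme-ftl.img,nvme,,,,,true            -> metadata-capable namespace (NVME_MS=true), used by FTL
    #   ex7-nvme.img                              -> plain NVMe disk, all defaults
    #   ex7-nvme-multi0.img,nvme,multi1:multi2    -> one controller carrying three namespaces
    #   ex7-nvme-fdp.img,nvme,,,,,,on             -> FDP turned on (NVME_FDP=on)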
00:01:14.343 Bringing machine 'default' up with 'libvirt' provider...
00:01:14.604 ==> default: Creating image (snapshot of base box volume).
00:01:14.604 ==> default: Creating domain with the following settings...
00:01:14.604 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1730971782_f1c71bac44b87cf4a00a
00:01:14.604 ==> default: -- Domain type: kvm
00:01:14.604 ==> default: -- Cpus: 10
00:01:14.604 ==> default: -- Feature: acpi
00:01:14.604 ==> default: -- Feature: apic
00:01:14.604 ==> default: -- Feature: pae
00:01:14.604 ==> default: -- Memory: 12288M
00:01:14.604 ==> default: -- Memory Backing: hugepages:
00:01:14.604 ==> default: -- Management MAC:
00:01:14.604 ==> default: -- Loader:
00:01:14.604 ==> default: -- Nvram:
00:01:14.604 ==> default: -- Base box: spdk/fedora39
00:01:14.604 ==> default: -- Storage pool: default
00:01:14.605 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1730971782_f1c71bac44b87cf4a00a.img (20G)
00:01:14.605 ==> default: -- Volume Cache: default
00:01:14.605 ==> default: -- Kernel:
00:01:14.605 ==> default: -- Initrd:
00:01:14.605 ==> default: -- Graphics Type: vnc
00:01:14.605 ==> default: -- Graphics Port: -1
00:01:14.605 ==> default: -- Graphics IP: 127.0.0.1
00:01:14.605 ==> default: -- Graphics Password: Not defined
00:01:14.605 ==> default: -- Video Type: cirrus
00:01:14.605 ==> default: -- Video VRAM: 9216
00:01:14.605 ==> default: -- Sound Type:
00:01:14.605 ==> default: -- Keymap: en-us
00:01:14.605 ==> default: -- TPM Path:
00:01:14.605 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:14.605 ==> default: -- Command line args:
00:01:14.605 ==> default: -> value=-device,
00:01:14.605 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:14.605 ==> default: -> value=-drive,
00:01:14.605 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:01:14.605 ==> default: -> value=-device,
00:01:14.605 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:01:14.605 ==> default: -> value=-device,
00:01:14.605 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:14.605 ==> default: -> value=-drive,
00:01:14.605 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-1-drive0,
00:01:14.605 ==> default: -> value=-device,
00:01:14.605 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:14.605 ==> default: -> value=-device,
00:01:14.605 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:01:14.605 ==> default: -> value=-drive,
00:01:14.605 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:01:14.605 ==> default: -> value=-device,
00:01:14.605 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:14.605 ==> default: -> value=-drive,
00:01:14.605 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:01:14.605 ==> default: -> value=-device,
00:01:14.605 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:14.605 ==> default: -> value=-drive,
00:01:14.605 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:01:14.605 ==> default: -> value=-device,
00:01:14.605 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:14.605 ==> default: -> value=-device,
00:01:14.605 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:01:14.605 ==> default: -> value=-device,
00:01:14.605 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:01:14.605 ==> default: -> value=-drive,
00:01:14.605 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:01:14.605 ==> default: -> value=-device,
00:01:14.605 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:14.605 ==> default: Creating shared folders metadata...
00:01:14.605 ==> default: Starting domain.
00:01:16.519 ==> default: Waiting for domain to get an IP address...
00:01:34.635 ==> default: Waiting for SSH to become available...
00:01:34.635 ==> default: Configuring and enabling network interfaces...
00:01:37.185 default: SSH address: 192.168.121.185:22
00:01:37.448 default: SSH username: vagrant
00:01:37.448 default: SSH auth method: private key
00:01:39.995 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:48.138 ==> default: Mounting SSHFS shared folder...
00:01:49.537 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:01:49.537 ==> default: Checking Mount..
00:01:50.925 ==> default: Folder Successfully Mounted!
00:01:50.925
00:01:50.925 SUCCESS!
00:01:50.925
00:01:50.925 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:01:50.925 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:50.925 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:01:50.925
00:01:50.935 [Pipeline] }
00:01:50.945 [Pipeline] // stage
00:01:50.952 [Pipeline] dir
00:01:50.952 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:01:50.953 [Pipeline] {
00:01:50.963 [Pipeline] catchError
00:01:50.965 [Pipeline] {
00:01:50.974 [Pipeline] sh
00:01:51.258 + vagrant ssh-config --host vagrant
00:01:51.258 + sed -ne '/^Host/,$p'
00:01:51.258 + tee ssh_conf
00:01:54.602 Host vagrant
00:01:54.602 HostName 192.168.121.185
00:01:54.602 User vagrant
00:01:54.602 Port 22
00:01:54.603 UserKnownHostsFile /dev/null
00:01:54.603 StrictHostKeyChecking no
00:01:54.603 PasswordAuthentication no
00:01:54.603 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:01:54.603 IdentitiesOnly yes
00:01:54.603 LogLevel FATAL
00:01:54.603 ForwardAgent yes
00:01:54.603 ForwardX11 yes
00:01:54.603
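With SSH configured, the guest now exposes the NVMe topology assembled by the -device/-drive arguments above; summarizing the serial-to-image mapping (derived from this run's command line, not tool output):

    # serial 12340, addr 0x10 -> ex7-nvme-ftl.img        (1 ns, ms=64 metadata, for FTL)
    # serial 12341, addr 0x11 -> ex7-nvme.img            (1 plain ns)
    # serial 12342, addr 0x12 -> ex7-nvme-multi0/1/2.img (3 namespaces on one controller)
    # serial 12343, addr 0x13 -> ex7-nvme-fdp.img        (1 ns in an nvme-subsys with fdp=on, fdp.nruh=8)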
00:01:54.616 [Pipeline] withEnv
00:01:54.618 [Pipeline] {
00:01:54.631 [Pipeline] sh
00:01:54.910 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:01:54.910 source /etc/os-release
00:01:54.910 [[ -e /image.version ]] && img=$(< /image.version)
00:01:54.910 # Minimal, systemd-like check.
00:01:54.910 if [[ -e /.dockerenv ]]; then
00:01:54.910 # Clear garbage from the node'\''s name:
00:01:54.910 # agt-er_autotest_547-896 -> autotest_547-896
00:01:54.910 # $HOSTNAME is the actual container id
00:01:54.910 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:54.910 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:54.910 # We can assume this is a mount from a host where container is running,
00:01:54.910 # so fetch its hostname to easily identify the target swarm worker.
00:01:54.910 container="$(< /etc/hostname) ($agent)"
00:01:54.910 else
00:01:54.910 # Fallback
00:01:54.910 container=$agent
00:01:54.910 fi
00:01:54.910 fi
00:01:54.910 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:54.910 '
00:01:54.919 [Pipeline] }
00:01:54.936 [Pipeline] // withEnv
00:01:54.945 [Pipeline] setCustomBuildProperty
00:01:54.959 [Pipeline] stage
00:01:54.961 [Pipeline] { (Tests)
00:01:54.977 [Pipeline] sh
00:01:55.255 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:55.559 [Pipeline] sh
00:01:55.838 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:56.111 [Pipeline] timeout
00:01:56.111 Timeout set to expire in 50 min
00:01:56.113 [Pipeline] {
00:01:56.128 [Pipeline] sh
00:01:56.406 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard'
00:01:56.972 HEAD is now at 899af6c35 lib/nvme: destruct controllers that failed init asynchronously
00:01:56.983 [Pipeline] sh
00:01:57.259 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo'
00:01:57.528 [Pipeline] sh
00:01:57.806 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:57.821 [Pipeline] sh
00:01:58.099 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo'
00:01:58.099 ++ readlink -f spdk_repo
00:01:58.099 + DIR_ROOT=/home/vagrant/spdk_repo
00:01:58.099 + [[ -n /home/vagrant/spdk_repo ]]
00:01:58.099 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:58.099 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:58.099 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:58.099 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:58.099 + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:58.099 + [[ nvme-vg-autotest == pkgdep-* ]]
00:01:58.099 + cd /home/vagrant/spdk_repo
00:01:58.099 + source /etc/os-release
00:01:58.099 ++ NAME='Fedora Linux'
00:01:58.099 ++ VERSION='39 (Cloud Edition)'
00:01:58.099 ++ ID=fedora
00:01:58.099 ++ VERSION_ID=39
00:01:58.099 ++ VERSION_CODENAME=
00:01:58.099 ++ PLATFORM_ID=platform:f39
00:01:58.099 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:58.099 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:58.099 ++ LOGO=fedora-logo-icon
00:01:58.099 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:58.099 ++ HOME_URL=https://fedoraproject.org/
00:01:58.099 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:58.099 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:58.099 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:58.099 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:58.099 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:58.099 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:58.099 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:58.099 ++ SUPPORT_END=2024-11-12
00:01:58.099 ++ VARIANT='Cloud Edition'
00:01:58.099 ++ VARIANT_ID=cloud
00:01:58.099 + uname -a
00:01:58.099 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:58.099 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:58.671 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:01:58.671 Hugepages
00:01:58.671 node hugesize free / total
00:01:58.671 node0 1048576kB 0 / 0
00:01:58.671 node0 2048kB 0 / 0
00:01:58.671
00:01:58.671 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:58.929 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:01:58.929 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:01:58.929 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:01:58.929 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme3 nvme3n1 nvme3n2 nvme3n3
00:01:58.929 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme2 nvme2n1
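The setup.sh status table closes the loop on that topology: the QEMU addr values reappear as guest PCI functions (addr=0x10 is 0000:00:10.0, and the three-namespace controller at 00:12.0 enumerates as nvme3 with nvme3n1 through nvme3n3). A hypothetical way to confirm the mapping from inside the guest, not run here:

    # Hypothetical check: print each controller's PCI path and serial via sysfs.
    for ctrl in /sys/class/nvme/nvme*; do
        printf '%s -> %s serial=%s\n' \
            "$(basename "$ctrl")" "$(readlink -f "$ctrl/device")" "$(cat "$ctrl/serial")"
    done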
00:01:58.929 + rm -f /tmp/spdk-ld-path
00:01:58.929 + source autorun-spdk.conf
00:01:58.929 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:58.929 ++ SPDK_TEST_NVME=1
00:01:58.929 ++ SPDK_TEST_FTL=1
00:01:58.929 ++ SPDK_TEST_ISAL=1
00:01:58.929 ++ SPDK_RUN_ASAN=1
00:01:58.929 ++ SPDK_RUN_UBSAN=1
00:01:58.929 ++ SPDK_TEST_XNVME=1
00:01:58.929 ++ SPDK_TEST_NVME_FDP=1
00:01:58.929 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:58.929 ++ RUN_NIGHTLY=0
00:01:58.929 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:58.929 + [[ -n '' ]]
00:01:58.929 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:58.929 + for M in /var/spdk/build-*-manifest.txt
00:01:58.929 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:58.929 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:01:58.929 + for M in /var/spdk/build-*-manifest.txt
00:01:58.929 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:58.929 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:58.929 + for M in /var/spdk/build-*-manifest.txt
00:01:58.929 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:58.929 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:58.929 ++ uname
00:01:58.929 + [[ Linux == \L\i\n\u\x ]]
00:01:58.929 + sudo dmesg -T
00:01:58.929 + sudo dmesg --clear
00:01:58.929 + dmesg_pid=5025
00:01:58.929 + [[ Fedora Linux == FreeBSD ]]
00:01:58.929 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:58.929 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:58.929 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:58.929 + [[ -x /usr/src/fio-static/fio ]]
00:01:58.929 + sudo dmesg -Tw
00:01:58.929 + export FIO_BIN=/usr/src/fio-static/fio
00:01:58.929 + FIO_BIN=/usr/src/fio-static/fio
00:01:58.929 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:58.929 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:58.929 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:58.929 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:58.929 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:58.929 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:58.929 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:58.930 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:58.930 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:58.930 09:30:26 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
00:01:58.930 09:30:26 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:58.930 09:30:26 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:58.930 09:30:26 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
00:01:58.930 09:30:26 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
00:01:58.930 09:30:26 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
00:01:58.930 09:30:26 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:01:58.930 09:30:26 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:01:58.930 09:30:26 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
00:01:58.930 09:30:26 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
00:01:58.930 09:30:26 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:58.930 09:30:26 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0
00:01:58.930 09:30:26 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:58.930 09:30:26 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:59.187 09:30:26 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
00:01:59.187 09:30:26 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:01:59.187 09:30:26 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:59.187 09:30:26 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:59.187 09:30:26 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:59.187 09:30:26 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:59.187 09:30:26 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:59.187 09:30:26 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:59.187 09:30:26 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:59.187 09:30:26 -- paths/export.sh@5 -- $ export PATH
00:01:59.187 09:30:26 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:59.187 09:30:26 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:01:59.187 09:30:26 -- common/autobuild_common.sh@486 -- $ date +%s
00:01:59.187 09:30:26 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730971826.XXXXXX
00:01:59.187 09:30:26 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730971826.1COi7M
00:01:59.187 09:30:26 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:01:59.187 09:30:26 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:01:59.187 09:30:26 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:01:59.187 09:30:26 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:01:59.187 09:30:26 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:01:59.187 09:30:26 -- common/autobuild_common.sh@502 -- $ get_config_params
00:01:59.187 09:30:26 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:01:59.187 09:30:26 -- common/autotest_common.sh@10 -- $ set +x
00:01:59.187 09:30:26 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:01:59.187 09:30:26 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:01:59.187 09:30:26 -- pm/common@17 -- $ local monitor
00:01:59.187 09:30:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:59.187 09:30:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:59.187 09:30:26 -- pm/common@25 -- $ sleep 1
00:01:59.188 09:30:26 -- pm/common@21 -- $ date +%s
00:01:59.188 09:30:26 -- pm/common@21 -- $ date +%s
00:01:59.188 09:30:26 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730971826
00:01:59.188 09:30:26 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730971826
00:01:59.188 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730971826_collect-vmstat.pm.log
00:01:59.188 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730971826_collect-cpu-load.pm.log
00:02:00.122 09:30:27 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:02:00.122 09:30:27 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:00.122 09:30:27 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:00.122 09:30:27 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:00.122 09:30:27 -- spdk/autobuild.sh@16 -- $ date -u
00:02:00.122 Thu Nov 7 09:30:27 AM UTC 2024
00:02:00.122 09:30:27 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:00.122 v25.01-pre-171-g899af6c35
00:02:00.122 09:30:27 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:02:00.122 09:30:27 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:02:00.122 09:30:27 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:02:00.122 09:30:27 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:02:00.122 09:30:27 -- common/autotest_common.sh@10 -- $ set +x
00:02:00.122 ************************************
00:02:00.122 START TEST asan
00:02:00.122 ************************************
00:02:00.122 using asan
00:02:00.122 09:30:27 asan -- common/autotest_common.sh@1127 -- $ echo 'using asan'
00:02:00.122
00:02:00.122 real 0m0.000s
00:02:00.122 user 0m0.000s
00:02:00.122 sys 0m0.000s
00:02:00.122 09:30:27 asan -- common/autotest_common.sh@1128 -- $ xtrace_disable
00:02:00.122 ************************************
00:02:00.122 09:30:27 asan -- common/autotest_common.sh@10 -- $ set +x
00:02:00.122 END TEST asan
00:02:00.122 ************************************
00:02:00.122 09:30:27 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:00.122 09:30:27 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:00.122 09:30:27 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:02:00.122 09:30:27 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:02:00.122 09:30:27 -- common/autotest_common.sh@10 -- $ set +x
00:02:00.122 ************************************
00:02:00.122 START TEST ubsan
00:02:00.122 ************************************
00:02:00.122 using ubsan
00:02:00.122 09:30:27 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan'
00:02:00.122
00:02:00.122 real 0m0.000s
00:02:00.122 user 0m0.000s
00:02:00.122 sys 0m0.000s
00:02:00.122 09:30:27 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable
00:02:00.122 ************************************
00:02:00.122 END TEST ubsan
00:02:00.122 09:30:27 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:00.122 ************************************
00:02:00.122 09:30:27 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:00.122 09:30:27 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:00.122 09:30:27 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:00.122 09:30:27 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:00.122 09:30:27 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:00.122 09:30:27 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:00.122 09:30:27 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:00.122 09:30:27 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:00.122 09:30:27 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:02:00.381 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:00.381 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:00.640 Using 'verbs' RDMA provider
00:02:11.574 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:02:23.800 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:02:23.800 Creating mk/config.mk...done.
00:02:23.800 Creating mk/cc.flags.mk...done.
00:02:23.800 Type 'make' to build.
00:02:23.800 09:30:51 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:02:23.800 09:30:51 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:02:23.800 09:30:51 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:02:23.800 09:30:51 -- common/autotest_common.sh@10 -- $ set +x
00:02:23.800 ************************************
00:02:23.800 START TEST make
00:02:23.800 ************************************
00:02:23.800 09:30:51 make -- common/autotest_common.sh@1127 -- $ make -j10
00:02:23.800 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:02:23.800 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:02:23.800 meson setup builddir \
00:02:23.800 -Dwith-libaio=enabled \
00:02:23.800 -Dwith-liburing=enabled \
00:02:23.800 -Dwith-libvfn=disabled \
00:02:23.800 -Dwith-spdk=disabled \
00:02:23.800 -Dexamples=false \
00:02:23.800 -Dtests=false \
00:02:23.800 -Dtools=false && \
00:02:23.800 meson compile -C builddir && \
00:02:23.800 cd -)
00:02:23.800 make[1]: Nothing to be done for 'all'.
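The xnvme subproject is configured with only the libaio and liburing backends enabled; libvfn, the SPDK backend, examples, tests, and tools are all off. If a switch needs flipping later, meson can reconfigure the existing build directory rather than starting over; a hypothetical example:

    cd /home/vagrant/spdk_repo/spdk/xnvme
    meson configure builddir -Dwith-libvfn=enabled   # hypothetical: turn the libvfn backend back on
    meson compile -C builddir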
00:02:26.342 The Meson build system
00:02:26.342 Version: 1.5.0
00:02:26.342 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:02:26.342 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:26.342 Build type: native build
00:02:26.342 Project name: xnvme
00:02:26.342 Project version: 0.7.5
00:02:26.342 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:26.342 C linker for the host machine: cc ld.bfd 2.40-14
00:02:26.342 Host machine cpu family: x86_64
00:02:26.342 Host machine cpu: x86_64
00:02:26.342 Message: host_machine.system: linux
00:02:26.342 Compiler for C supports arguments -Wno-missing-braces: YES
00:02:26.342 Compiler for C supports arguments -Wno-cast-function-type: YES
00:02:26.342 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:02:26.342 Run-time dependency threads found: YES
00:02:26.342 Has header "setupapi.h" : NO
00:02:26.342 Has header "linux/blkzoned.h" : YES
00:02:26.342 Has header "linux/blkzoned.h" : YES (cached)
00:02:26.342 Has header "libaio.h" : YES
00:02:26.342 Library aio found: YES
00:02:26.342 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:26.342 Run-time dependency liburing found: YES 2.2
00:02:26.342 Dependency libvfn skipped: feature with-libvfn disabled
00:02:26.342 Found CMake: /usr/bin/cmake (3.27.7)
00:02:26.342 Run-time dependency libisal found: NO (tried pkgconfig and cmake)
00:02:26.342 Subproject spdk : skipped: feature with-spdk disabled
00:02:26.342 Run-time dependency appleframeworks found: NO (tried framework)
00:02:26.342 Run-time dependency appleframeworks found: NO (tried framework)
00:02:26.342 Library rt found: YES
00:02:26.342 Checking for function "clock_gettime" with dependency -lrt: YES
00:02:26.342 Configuring xnvme_config.h using configuration
00:02:26.342 Configuring xnvme.spec using configuration
00:02:26.342 Run-time dependency bash-completion found: YES 2.11
00:02:26.342 Message: Bash-completions: /usr/share/bash-completion/completions
00:02:26.342 Program cp found: YES (/usr/bin/cp)
00:02:26.342 Build targets in project: 3
00:02:26.342
00:02:26.342 xnvme 0.7.5
00:02:26.342
00:02:26.342 Subprojects
00:02:26.342 spdk : NO Feature 'with-spdk' disabled
00:02:26.342
00:02:26.342 User defined options
00:02:26.342 examples : false
00:02:26.342 tests : false
00:02:26.342 tools : false
00:02:26.342 with-libaio : enabled
00:02:26.342 with-liburing: enabled
00:02:26.342 with-libvfn : disabled
00:02:26.342 with-spdk : disabled
00:02:26.342
00:02:26.342 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:26.602 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:02:26.602 [1/76] Generating toolbox/xnvme-driver-script with a custom command
00:02:26.602 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o
00:02:26.602 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o
00:02:26.602 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o
00:02:26.602 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o
00:02:26.602 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o
00:02:26.602 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o
00:02:26.602 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o
00:02:26.602 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o
00:02:26.602 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o
00:02:26.602 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o
00:02:26.862 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o
00:02:26.863 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o
00:02:26.863 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o
00:02:26.863 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o
00:02:26.863 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o
00:02:26.863 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o
00:02:26.863 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o
00:02:26.863 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o
00:02:26.863 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o
00:02:26.863 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o
00:02:26.863 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o
00:02:26.863 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o
00:02:26.863 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o
00:02:26.863 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o
00:02:26.863 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o
00:02:26.863 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o
00:02:26.863 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o
00:02:26.863 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o
00:02:26.863 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o
00:02:26.863 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o
00:02:26.863 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o
00:02:26.863 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o
00:02:26.863 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o
00:02:26.863 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o
00:02:26.863 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o
00:02:26.863 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o
00:02:26.863 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o
00:02:26.863 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o
00:02:26.863 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o
00:02:26.863 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o
00:02:26.863 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o
00:02:27.123 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o
00:02:27.123 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o
00:02:27.123 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o
00:02:27.123 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o
00:02:27.123 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o
00:02:27.123 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o
00:02:27.123 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o
00:02:27.123 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o
00:02:27.123 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o
00:02:27.123 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o
00:02:27.123 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o
00:02:27.123 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o
00:02:27.123 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o
00:02:27.123 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o
00:02:27.123 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o
00:02:27.123 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o
00:02:27.123 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o
00:02:27.123 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o
00:02:27.123 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o
00:02:27.123 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o
00:02:27.123 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o
00:02:27.123 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o
00:02:27.123 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o
00:02:27.123 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o
00:02:27.383 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o
00:02:27.383 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o
00:02:27.383 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o
00:02:27.383 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o
00:02:27.383 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o
00:02:27.383 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o
00:02:27.383 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o
00:02:27.644 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o
00:02:27.644 [75/76] Linking static target lib/libxnvme.a
00:02:27.644 [76/76] Linking target lib/libxnvme.so.0.7.5
00:02:27.644 INFO: autodetecting backend as ninja
00:02:27.644 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:27.904 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:02:34.478 The Meson build system
00:02:34.478 Version: 1.5.0
00:02:34.478 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:34.478 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:34.478 Build type: native build
00:02:34.478 Program cat found: YES (/usr/bin/cat)
00:02:34.478 Project name: DPDK
00:02:34.478 Project version: 24.03.0
00:02:34.478 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:34.478 C linker for the host machine: cc ld.bfd 2.40-14
00:02:34.478 Host machine cpu family: x86_64
00:02:34.478 Host machine cpu: x86_64
00:02:34.478 Message: ## Building in Developer Mode ##
00:02:34.478 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:34.478 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:34.478 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:34.478 Program python3 found: YES (/usr/bin/python3)
00:02:34.478 Program cat found: YES (/usr/bin/cat)
00:02:34.478 Compiler for C supports arguments -march=native: YES
00:02:34.478 Checking for size of "void *" : 8
00:02:34.478 Checking for size of "void *" : 8 (cached)
00:02:34.478 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:34.478 Library m found: YES
00:02:34.478 Library numa found: YES
00:02:34.478 Has header "numaif.h" : YES
00:02:34.478 Library fdt found: NO
00:02:34.478 Library execinfo found: NO
00:02:34.478 Has header "execinfo.h" : YES
00:02:34.478 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:34.478 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:34.478 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:34.478 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:34.478 Run-time dependency openssl found: YES 3.1.1
00:02:34.478 Run-time dependency libpcap found: YES 1.10.4
00:02:34.478 Has header "pcap.h" with dependency libpcap: YES
00:02:34.478 Compiler for C supports arguments -Wcast-qual: YES
00:02:34.478 Compiler for C supports arguments -Wdeprecated: YES
00:02:34.478 Compiler for C supports arguments -Wformat: YES
00:02:34.478 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:34.478 Compiler for C supports arguments -Wformat-security: NO
00:02:34.478 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:34.478 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:34.478 Compiler for C supports arguments -Wnested-externs: YES
00:02:34.478 Compiler for C supports arguments -Wold-style-definition: YES
00:02:34.478 Compiler for C supports arguments -Wpointer-arith: YES
00:02:34.478 Compiler for C supports arguments -Wsign-compare: YES
00:02:34.478 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:34.478 Compiler for C supports arguments -Wundef: YES
00:02:34.478 Compiler for C supports arguments -Wwrite-strings: YES
00:02:34.478 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:34.478 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:34.478 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:34.478 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:34.478 Program objdump found: YES (/usr/bin/objdump)
00:02:34.478 Compiler for C supports arguments -mavx512f: YES
00:02:34.478 Checking if "AVX512 checking" compiles: YES
00:02:34.478 Fetching value of define "__SSE4_2__" : 1
00:02:34.478 Fetching value of define "__AES__" : 1
00:02:34.478 Fetching value of define "__AVX__" : 1
00:02:34.478 Fetching value of define "__AVX2__" : 1
00:02:34.478 Fetching value of define "__AVX512BW__" : 1
00:02:34.478 Fetching value of define "__AVX512CD__" : 1
00:02:34.478 Fetching value of define "__AVX512DQ__" : 1
00:02:34.478 Fetching value of define "__AVX512F__" : 1
00:02:34.478 Fetching value of define "__AVX512VL__" : 1
00:02:34.478 Fetching value of define "__PCLMUL__" : 1
00:02:34.478 Fetching value of define "__RDRND__" : 1
00:02:34.478 Fetching value of define "__RDSEED__" : 1
00:02:34.478 Fetching value of define "__VPCLMULQDQ__" : 1
00:02:34.478 Fetching value of define "__znver1__" : (undefined)
00:02:34.478 Fetching value of define "__znver2__" : (undefined)
00:02:34.478 Fetching value of define "__znver3__" : (undefined)
00:02:34.478 Fetching value of define "__znver4__" : (undefined)
00:02:34.478 Library asan found: YES
00:02:34.478 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:34.478 Message: lib/log: Defining dependency "log"
00:02:34.478 Message: lib/kvargs: Defining dependency "kvargs"
00:02:34.478 Message: lib/telemetry: Defining dependency "telemetry"
00:02:34.478 Library rt found: YES
00:02:34.478 Checking for function "getentropy" : NO
00:02:34.478 Message: lib/eal: Defining dependency "eal"
00:02:34.478 Message: lib/ring: Defining dependency "ring"
00:02:34.478 Message: lib/rcu: Defining dependency "rcu"
00:02:34.478 Message: lib/mempool: Defining dependency "mempool"
00:02:34.478 Message: lib/mbuf: Defining dependency "mbuf"
00:02:34.478 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:34.478 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:34.478 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:34.478 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:34.478 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:34.478 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:02:34.478 Compiler for C supports arguments -mpclmul: YES
00:02:34.478 Compiler for C supports arguments -maes: YES
00:02:34.478 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:34.478 Compiler for C supports arguments -mavx512bw: YES
00:02:34.478 Compiler for C supports arguments -mavx512dq: YES
00:02:34.478 Compiler for C supports arguments -mavx512vl: YES
00:02:34.478 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:34.479 Compiler for C supports arguments -mavx2: YES
00:02:34.479 Compiler for C supports arguments -mavx: YES
00:02:34.479 Message: lib/net: Defining dependency "net"
00:02:34.479 Message: lib/meter: Defining dependency "meter"
00:02:34.479 Message: lib/ethdev: Defining dependency "ethdev"
00:02:34.479 Message: lib/pci: Defining dependency "pci"
00:02:34.479 Message: lib/cmdline: Defining dependency "cmdline"
00:02:34.479 Message: lib/hash: Defining dependency "hash"
00:02:34.479 Message: lib/timer: Defining dependency "timer"
00:02:34.479 Message: lib/compressdev: Defining dependency "compressdev"
00:02:34.479 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:34.479 Message: lib/dmadev: Defining dependency "dmadev"
00:02:34.479 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:34.479 Message: lib/power: Defining dependency "power"
00:02:34.479 Message: lib/reorder: Defining dependency "reorder"
00:02:34.479 Message: lib/security: Defining dependency "security"
00:02:34.479 Has header "linux/userfaultfd.h" : YES
00:02:34.479 Has header "linux/vduse.h" : YES
00:02:34.479 Message: lib/vhost: Defining dependency "vhost"
00:02:34.479 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:34.479 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:34.479 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:34.479 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:34.479 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:34.479 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:34.479 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:34.479 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:34.479 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:34.479 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:34.479 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:34.479 Configuring doxy-api-html.conf using configuration
00:02:34.479 Configuring doxy-api-man.conf using configuration
00:02:34.479 Program mandb found: YES (/usr/bin/mandb)
00:02:34.479 Program sphinx-build found: NO
00:02:34.479 Configuring rte_build_config.h using configuration
00:02:34.479 Message:
00:02:34.479 =================
00:02:34.479 Applications Enabled
00:02:34.479 =================
00:02:34.479
00:02:34.479 apps:
00:02:34.479
00:02:34.479
00:02:34.479 Message:
00:02:34.479 =================
00:02:34.479 Libraries Enabled
00:02:34.479 =================
00:02:34.479
00:02:34.479 libs:
00:02:34.479 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:34.479 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:34.479 cryptodev, dmadev, power, reorder, security, vhost,
00:02:34.479
00:02:34.479 Message:
00:02:34.479 ===============
00:02:34.479 Drivers Enabled
00:02:34.479 ===============
00:02:34.479
00:02:34.479 common:
00:02:34.479
00:02:34.479 bus:
00:02:34.479 pci, vdev,
00:02:34.479 mempool:
00:02:34.479 ring,
00:02:34.479 dma:
00:02:34.479
00:02:34.479 net:
00:02:34.479
00:02:34.479 crypto:
00:02:34.479
00:02:34.479 compress:
00:02:34.479
00:02:34.479 vdpa:
00:02:34.479
00:02:34.479
00:02:34.479 Message:
00:02:34.479 =================
00:02:34.479 Content Skipped
00:02:34.479 =================
00:02:34.479
00:02:34.479 apps:
00:02:34.479 dumpcap: explicitly disabled via build config
00:02:34.479 graph: explicitly disabled via build config
00:02:34.479 pdump: explicitly disabled via build config
00:02:34.479 proc-info: explicitly disabled via build config
00:02:34.479 test-acl: explicitly disabled via build config
00:02:34.479 test-bbdev: explicitly disabled via build config
00:02:34.479 test-cmdline: explicitly disabled via build config
00:02:34.479 test-compress-perf: explicitly disabled via build config
00:02:34.479 test-crypto-perf: explicitly disabled via build config
00:02:34.479 test-dma-perf: explicitly disabled via build config
00:02:34.479 test-eventdev: explicitly disabled via build config
00:02:34.479 test-fib: explicitly disabled via build config
00:02:34.479 test-flow-perf: explicitly disabled via build config
00:02:34.479 test-gpudev: explicitly disabled via build config
00:02:34.479 test-mldev: explicitly disabled via build config
00:02:34.479 test-pipeline: explicitly disabled via build config
00:02:34.479 test-pmd: explicitly disabled via build config
00:02:34.479 test-regex: explicitly disabled via build config
00:02:34.479 test-sad: explicitly disabled via build config
00:02:34.479 test-security-perf: explicitly disabled via build config
00:02:34.479
00:02:34.479 libs:
00:02:34.479 argparse: explicitly disabled via build config
00:02:34.479 metrics: explicitly disabled via build config
00:02:34.479 acl: explicitly disabled via build config
00:02:34.479 bbdev: explicitly disabled via build config
00:02:34.479 bitratestats: explicitly disabled via build config
00:02:34.479 bpf: explicitly disabled via build config
00:02:34.479 cfgfile: explicitly disabled via build config
00:02:34.479 distributor: explicitly disabled via build config
00:02:34.479 efd: explicitly disabled via build config
00:02:34.479 eventdev: explicitly disabled via build config
00:02:34.479 dispatcher: explicitly disabled via build config
00:02:34.479 gpudev: explicitly disabled via build config
00:02:34.479 gro: explicitly disabled via build config
00:02:34.479 gso: explicitly disabled via build config
00:02:34.479 ip_frag: explicitly disabled via build config
00:02:34.479 jobstats: explicitly disabled via build config
00:02:34.479 latencystats: explicitly disabled via build config
00:02:34.479 lpm: explicitly disabled via build config
00:02:34.479 member: explicitly disabled via build config
00:02:34.479 pcapng: explicitly disabled via build config
00:02:34.479 rawdev: explicitly disabled via build config
00:02:34.479 regexdev: explicitly disabled via build config
regexdev: explicitly disabled via build config 00:02:34.479 mldev: explicitly disabled via build config 00:02:34.479 rib: explicitly disabled via build config 00:02:34.479 sched: explicitly disabled via build config 00:02:34.479 stack: explicitly disabled via build config 00:02:34.479 ipsec: explicitly disabled via build config 00:02:34.479 pdcp: explicitly disabled via build config 00:02:34.479 fib: explicitly disabled via build config 00:02:34.479 port: explicitly disabled via build config 00:02:34.479 pdump: explicitly disabled via build config 00:02:34.479 table: explicitly disabled via build config 00:02:34.479 pipeline: explicitly disabled via build config 00:02:34.479 graph: explicitly disabled via build config 00:02:34.479 node: explicitly disabled via build config 00:02:34.479 00:02:34.479 drivers: 00:02:34.479 common/cpt: not in enabled drivers build config 00:02:34.479 common/dpaax: not in enabled drivers build config 00:02:34.479 common/iavf: not in enabled drivers build config 00:02:34.479 common/idpf: not in enabled drivers build config 00:02:34.479 common/ionic: not in enabled drivers build config 00:02:34.479 common/mvep: not in enabled drivers build config 00:02:34.479 common/octeontx: not in enabled drivers build config 00:02:34.479 bus/auxiliary: not in enabled drivers build config 00:02:34.479 bus/cdx: not in enabled drivers build config 00:02:34.479 bus/dpaa: not in enabled drivers build config 00:02:34.479 bus/fslmc: not in enabled drivers build config 00:02:34.479 bus/ifpga: not in enabled drivers build config 00:02:34.479 bus/platform: not in enabled drivers build config 00:02:34.479 bus/uacce: not in enabled drivers build config 00:02:34.479 bus/vmbus: not in enabled drivers build config 00:02:34.479 common/cnxk: not in enabled drivers build config 00:02:34.479 common/mlx5: not in enabled drivers build config 00:02:34.479 common/nfp: not in enabled drivers build config 00:02:34.479 common/nitrox: not in enabled drivers build config 00:02:34.479 common/qat: not in enabled drivers build config 00:02:34.479 common/sfc_efx: not in enabled drivers build config 00:02:34.479 mempool/bucket: not in enabled drivers build config 00:02:34.479 mempool/cnxk: not in enabled drivers build config 00:02:34.479 mempool/dpaa: not in enabled drivers build config 00:02:34.479 mempool/dpaa2: not in enabled drivers build config 00:02:34.479 mempool/octeontx: not in enabled drivers build config 00:02:34.479 mempool/stack: not in enabled drivers build config 00:02:34.479 dma/cnxk: not in enabled drivers build config 00:02:34.479 dma/dpaa: not in enabled drivers build config 00:02:34.479 dma/dpaa2: not in enabled drivers build config 00:02:34.479 dma/hisilicon: not in enabled drivers build config 00:02:34.479 dma/idxd: not in enabled drivers build config 00:02:34.479 dma/ioat: not in enabled drivers build config 00:02:34.479 dma/skeleton: not in enabled drivers build config 00:02:34.479 net/af_packet: not in enabled drivers build config 00:02:34.479 net/af_xdp: not in enabled drivers build config 00:02:34.479 net/ark: not in enabled drivers build config 00:02:34.479 net/atlantic: not in enabled drivers build config 00:02:34.479 net/avp: not in enabled drivers build config 00:02:34.479 net/axgbe: not in enabled drivers build config 00:02:34.479 net/bnx2x: not in enabled drivers build config 00:02:34.479 net/bnxt: not in enabled drivers build config 00:02:34.479 net/bonding: not in enabled drivers build config 00:02:34.479 net/cnxk: not in enabled drivers build config 00:02:34.479 net/cpfl: 
not in enabled drivers build config 00:02:34.479 net/cxgbe: not in enabled drivers build config 00:02:34.479 net/dpaa: not in enabled drivers build config 00:02:34.479 net/dpaa2: not in enabled drivers build config 00:02:34.479 net/e1000: not in enabled drivers build config 00:02:34.479 net/ena: not in enabled drivers build config 00:02:34.479 net/enetc: not in enabled drivers build config 00:02:34.479 net/enetfec: not in enabled drivers build config 00:02:34.479 net/enic: not in enabled drivers build config 00:02:34.479 net/failsafe: not in enabled drivers build config 00:02:34.479 net/fm10k: not in enabled drivers build config 00:02:34.479 net/gve: not in enabled drivers build config 00:02:34.479 net/hinic: not in enabled drivers build config 00:02:34.479 net/hns3: not in enabled drivers build config 00:02:34.479 net/i40e: not in enabled drivers build config 00:02:34.479 net/iavf: not in enabled drivers build config 00:02:34.479 net/ice: not in enabled drivers build config 00:02:34.480 net/idpf: not in enabled drivers build config 00:02:34.480 net/igc: not in enabled drivers build config 00:02:34.480 net/ionic: not in enabled drivers build config 00:02:34.480 net/ipn3ke: not in enabled drivers build config 00:02:34.480 net/ixgbe: not in enabled drivers build config 00:02:34.480 net/mana: not in enabled drivers build config 00:02:34.480 net/memif: not in enabled drivers build config 00:02:34.480 net/mlx4: not in enabled drivers build config 00:02:34.480 net/mlx5: not in enabled drivers build config 00:02:34.480 net/mvneta: not in enabled drivers build config 00:02:34.480 net/mvpp2: not in enabled drivers build config 00:02:34.480 net/netvsc: not in enabled drivers build config 00:02:34.480 net/nfb: not in enabled drivers build config 00:02:34.480 net/nfp: not in enabled drivers build config 00:02:34.480 net/ngbe: not in enabled drivers build config 00:02:34.480 net/null: not in enabled drivers build config 00:02:34.480 net/octeontx: not in enabled drivers build config 00:02:34.480 net/octeon_ep: not in enabled drivers build config 00:02:34.480 net/pcap: not in enabled drivers build config 00:02:34.480 net/pfe: not in enabled drivers build config 00:02:34.480 net/qede: not in enabled drivers build config 00:02:34.480 net/ring: not in enabled drivers build config 00:02:34.480 net/sfc: not in enabled drivers build config 00:02:34.480 net/softnic: not in enabled drivers build config 00:02:34.480 net/tap: not in enabled drivers build config 00:02:34.480 net/thunderx: not in enabled drivers build config 00:02:34.480 net/txgbe: not in enabled drivers build config 00:02:34.480 net/vdev_netvsc: not in enabled drivers build config 00:02:34.480 net/vhost: not in enabled drivers build config 00:02:34.480 net/virtio: not in enabled drivers build config 00:02:34.480 net/vmxnet3: not in enabled drivers build config 00:02:34.480 raw/*: missing internal dependency, "rawdev" 00:02:34.480 crypto/armv8: not in enabled drivers build config 00:02:34.480 crypto/bcmfs: not in enabled drivers build config 00:02:34.480 crypto/caam_jr: not in enabled drivers build config 00:02:34.480 crypto/ccp: not in enabled drivers build config 00:02:34.480 crypto/cnxk: not in enabled drivers build config 00:02:34.480 crypto/dpaa_sec: not in enabled drivers build config 00:02:34.480 crypto/dpaa2_sec: not in enabled drivers build config 00:02:34.480 crypto/ipsec_mb: not in enabled drivers build config 00:02:34.480 crypto/mlx5: not in enabled drivers build config 00:02:34.480 crypto/mvsam: not in enabled drivers build config 
00:02:34.480 crypto/nitrox: not in enabled drivers build config 00:02:34.480 crypto/null: not in enabled drivers build config 00:02:34.480 crypto/octeontx: not in enabled drivers build config 00:02:34.480 crypto/openssl: not in enabled drivers build config 00:02:34.480 crypto/scheduler: not in enabled drivers build config 00:02:34.480 crypto/uadk: not in enabled drivers build config 00:02:34.480 crypto/virtio: not in enabled drivers build config 00:02:34.480 compress/isal: not in enabled drivers build config 00:02:34.480 compress/mlx5: not in enabled drivers build config 00:02:34.480 compress/nitrox: not in enabled drivers build config 00:02:34.480 compress/octeontx: not in enabled drivers build config 00:02:34.480 compress/zlib: not in enabled drivers build config 00:02:34.480 regex/*: missing internal dependency, "regexdev" 00:02:34.480 ml/*: missing internal dependency, "mldev" 00:02:34.480 vdpa/ifc: not in enabled drivers build config 00:02:34.480 vdpa/mlx5: not in enabled drivers build config 00:02:34.480 vdpa/nfp: not in enabled drivers build config 00:02:34.480 vdpa/sfc: not in enabled drivers build config 00:02:34.480 event/*: missing internal dependency, "eventdev" 00:02:34.480 baseband/*: missing internal dependency, "bbdev" 00:02:34.480 gpu/*: missing internal dependency, "gpudev" 00:02:34.480 00:02:34.480 00:02:34.480 Build targets in project: 84 00:02:34.480 00:02:34.480 DPDK 24.03.0 00:02:34.480 00:02:34.480 User defined options 00:02:34.480 buildtype : debug 00:02:34.480 default_library : shared 00:02:34.480 libdir : lib 00:02:34.480 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:34.480 b_sanitize : address 00:02:34.480 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:34.480 c_link_args : 00:02:34.480 cpu_instruction_set: native 00:02:34.480 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:34.480 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:34.480 enable_docs : false 00:02:34.480 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:34.480 enable_kmods : false 00:02:34.480 max_lcores : 128 00:02:34.480 tests : false 00:02:34.480 00:02:34.480 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:34.738 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:34.738 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:34.738 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:34.738 [3/267] Linking static target lib/librte_kvargs.a 00:02:34.997 [4/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:34.997 [5/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:34.997 [6/267] Linking static target lib/librte_log.a 00:02:35.256 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:35.256 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:35.256 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:35.256 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 
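The "User defined options" summary above records how SPDK's dpdkbuild step configured this DPDK tree: a debug build of shared libraries, instrumented with AddressSanitizer (b_sanitize=address), with nearly every app and library disabled and only the pci/vdev buses and the ring mempool driver enabled. A hand-written equivalent of that configure-and-build step would look roughly like the sketch below; the wrapper's exact command line is not captured in this log, so the flags are reconstructed from the summary and the long disable_apps/disable_libs values are elided.

  # copy the full disable_apps/disable_libs values verbatim from the summary above
  meson setup build-tmp \
      -Dbuildtype=debug \
      -Ddefault_library=shared \
      -Dprefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
      -Db_sanitize=address \
      -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
      -Dcpu_instruction_set=native \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
      -Dmax_lcores=128 \
      -Dtests=false
  ninja -C build-tmp -j 10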
00:02:35.256 [11/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:35.256 [12/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.256 [13/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:35.256 [14/267] Linking static target lib/librte_telemetry.a 00:02:35.256 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:35.256 [16/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:35.256 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:35.256 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:35.514 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:35.773 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:35.773 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:35.773 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:35.773 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:35.773 [24/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.773 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:35.773 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:35.773 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:35.773 [28/267] Linking target lib/librte_log.so.24.1 00:02:36.032 [29/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:36.032 [30/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:36.032 [31/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:36.032 [32/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.032 [33/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:36.032 [34/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:36.032 [35/267] Linking target lib/librte_kvargs.so.24.1 00:02:36.032 [36/267] Linking target lib/librte_telemetry.so.24.1 00:02:36.319 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:36.319 [38/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:36.319 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:36.319 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:36.319 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:36.319 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:36.319 [43/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:36.319 [44/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:36.319 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:36.578 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:36.578 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:36.578 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:36.578 [49/267] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:36.578 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:36.578 [51/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:36.837 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:36.837 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:36.837 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:36.837 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:36.837 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:36.837 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:36.837 [58/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:36.837 [59/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:36.837 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:37.096 [61/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:37.096 [62/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:37.096 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:37.096 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:37.354 [65/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:37.354 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:37.354 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:37.354 [68/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:37.354 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:37.354 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:37.613 [71/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:37.613 [72/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:37.613 [73/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:37.613 [74/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:37.613 [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:37.613 [76/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:37.613 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:37.872 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:37.872 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:37.872 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:37.872 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:37.872 [82/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:38.130 [83/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:38.130 [84/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:38.130 [85/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:38.130 [86/267] Linking static target lib/librte_ring.a 00:02:38.130 [87/267] Linking static target lib/librte_eal.a 00:02:38.388 [88/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:38.388 [89/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:38.388 [90/267] Compiling C object 
lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:38.388 [91/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:38.388 [92/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:38.646 [93/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:38.646 [94/267] Linking static target lib/librte_rcu.a 00:02:38.646 [95/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:38.646 [96/267] Linking static target lib/librte_mempool.a 00:02:38.646 [97/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.903 [98/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:38.903 [99/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:38.903 [100/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:38.903 [101/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:38.903 [102/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:38.903 [103/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:38.903 [104/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:38.903 [105/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.903 [106/267] Linking static target lib/librte_mbuf.a 00:02:39.162 [107/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:39.162 [108/267] Linking static target lib/librte_net.a 00:02:39.162 [109/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:39.162 [110/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:39.421 [111/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:39.421 [112/267] Linking static target lib/librte_meter.a 00:02:39.421 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:39.421 [114/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:39.421 [115/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.678 [116/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:39.678 [117/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.679 [118/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.679 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:39.936 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:39.936 [121/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.936 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:40.194 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:40.194 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:40.194 [125/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:40.194 [126/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:40.194 [127/267] Linking static target lib/librte_pci.a 00:02:40.194 [128/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:40.194 [129/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:40.194 [130/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:40.452 [131/267] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:40.452 [132/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:40.452 [133/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:40.452 [134/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:40.452 [135/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:40.452 [136/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:40.452 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:40.452 [138/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:40.452 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:40.452 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:40.452 [141/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.452 [142/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:40.710 [143/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:40.710 [144/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:40.710 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:40.710 [146/267] Linking static target lib/librte_cmdline.a 00:02:40.710 [147/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:40.968 [148/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:40.968 [149/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:40.968 [150/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:40.968 [151/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:40.968 [152/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:40.968 [153/267] Linking static target lib/librte_timer.a 00:02:41.226 [154/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:41.226 [155/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:41.226 [156/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:41.226 [157/267] Linking static target lib/librte_ethdev.a 00:02:41.226 [158/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:41.484 [159/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:41.484 [160/267] Linking static target lib/librte_hash.a 00:02:41.484 [161/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:41.484 [162/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.484 [163/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:41.484 [164/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:41.484 [165/267] Linking static target lib/librte_compressdev.a 00:02:41.484 [166/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:41.742 [167/267] Linking static target lib/librte_dmadev.a 00:02:41.742 [168/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:41.742 [169/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:41.742 [170/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:42.000 [171/267] Compiling C object 
lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:42.000 [172/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:42.000 [173/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.258 [174/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:42.258 [175/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:42.258 [176/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.258 [177/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:42.258 [178/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:42.258 [179/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.258 [180/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:42.258 [181/267] Linking static target lib/librte_cryptodev.a 00:02:42.258 [182/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.515 [183/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:42.774 [184/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:42.774 [185/267] Linking static target lib/librte_power.a 00:02:42.774 [186/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:42.774 [187/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:42.774 [188/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:42.774 [189/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:42.774 [190/267] Linking static target lib/librte_reorder.a 00:02:42.774 [191/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:43.054 [192/267] Linking static target lib/librte_security.a 00:02:43.313 [193/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:43.313 [194/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.313 [195/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:43.571 [196/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.571 [197/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:43.571 [198/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:43.571 [199/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.829 [200/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:43.829 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:43.829 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:43.829 [203/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:43.829 [204/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:44.087 [205/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:44.087 [206/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:44.087 [207/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:44.087 [208/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:44.087 [209/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:44.345 [210/267] Generating lib/cryptodev.sym_chk with a 
custom command (wrapped by meson to capture output) 00:02:44.345 [211/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:44.345 [212/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:44.345 [213/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:44.345 [214/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:44.345 [215/267] Linking static target drivers/librte_bus_vdev.a 00:02:44.345 [216/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:44.345 [217/267] Linking static target drivers/librte_bus_pci.a 00:02:44.345 [218/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:44.345 [219/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:44.345 [220/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:44.604 [221/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:44.604 [222/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.604 [223/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:44.604 [224/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:44.604 [225/267] Linking static target drivers/librte_mempool_ring.a 00:02:44.862 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.429 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:46.364 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.364 [229/267] Linking target lib/librte_eal.so.24.1 00:02:46.364 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:46.364 [231/267] Linking target lib/librte_timer.so.24.1 00:02:46.364 [232/267] Linking target lib/librte_ring.so.24.1 00:02:46.364 [233/267] Linking target lib/librte_pci.so.24.1 00:02:46.364 [234/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:46.364 [235/267] Linking target lib/librte_meter.so.24.1 00:02:46.623 [236/267] Linking target lib/librte_dmadev.so.24.1 00:02:46.623 [237/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:46.623 [238/267] Linking target lib/librte_mempool.so.24.1 00:02:46.623 [239/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:46.623 [240/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:46.623 [241/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:46.623 [242/267] Linking target lib/librte_rcu.so.24.1 00:02:46.623 [243/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:46.623 [244/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:46.623 [245/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:46.623 [246/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:46.623 [247/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:46.623 [248/267] Linking target lib/librte_mbuf.so.24.1 00:02:46.881 [249/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:46.881 [250/267] Linking 
target lib/librte_compressdev.so.24.1 00:02:46.881 [251/267] Linking target lib/librte_cryptodev.so.24.1 00:02:46.881 [252/267] Linking target lib/librte_reorder.so.24.1 00:02:46.881 [253/267] Linking target lib/librte_net.so.24.1 00:02:46.881 [254/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:46.881 [255/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:47.138 [256/267] Linking target lib/librte_hash.so.24.1 00:02:47.138 [257/267] Linking target lib/librte_cmdline.so.24.1 00:02:47.138 [258/267] Linking target lib/librte_security.so.24.1 00:02:47.138 [259/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:47.138 [260/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.138 [261/267] Linking target lib/librte_ethdev.so.24.1 00:02:47.397 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:47.397 [263/267] Linking target lib/librte_power.so.24.1 00:02:48.771 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:49.029 [265/267] Linking static target lib/librte_vhost.a 00:02:50.412 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.412 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:50.412 INFO: autodetecting backend as ninja 00:02:50.412 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:05.423 CC lib/ut_mock/mock.o 00:03:05.423 CC lib/log/log.o 00:03:05.423 CC lib/log/log_flags.o 00:03:05.423 CC lib/log/log_deprecated.o 00:03:05.423 CC lib/ut/ut.o 00:03:05.423 LIB libspdk_log.a 00:03:05.423 LIB libspdk_ut_mock.a 00:03:05.423 LIB libspdk_ut.a 00:03:05.423 SO libspdk_ut_mock.so.6.0 00:03:05.423 SO libspdk_ut.so.2.0 00:03:05.423 SO libspdk_log.so.7.1 00:03:05.423 SYMLINK libspdk_ut_mock.so 00:03:05.423 SYMLINK libspdk_ut.so 00:03:05.423 SYMLINK libspdk_log.so 00:03:05.423 CC lib/util/base64.o 00:03:05.423 CC lib/util/bit_array.o 00:03:05.423 CC lib/util/cpuset.o 00:03:05.423 CC lib/util/crc32.o 00:03:05.423 CC lib/util/crc32c.o 00:03:05.423 CC lib/util/crc16.o 00:03:05.423 CC lib/dma/dma.o 00:03:05.423 CXX lib/trace_parser/trace.o 00:03:05.423 CC lib/ioat/ioat.o 00:03:05.423 CC lib/vfio_user/host/vfio_user_pci.o 00:03:05.423 CC lib/util/crc32_ieee.o 00:03:05.423 CC lib/util/crc64.o 00:03:05.423 CC lib/util/dif.o 00:03:05.423 CC lib/util/fd.o 00:03:05.423 LIB libspdk_dma.a 00:03:05.423 CC lib/util/fd_group.o 00:03:05.423 CC lib/util/file.o 00:03:05.423 SO libspdk_dma.so.5.0 00:03:05.423 CC lib/util/hexlify.o 00:03:05.423 CC lib/util/iov.o 00:03:05.423 LIB libspdk_ioat.a 00:03:05.423 SYMLINK libspdk_dma.so 00:03:05.423 CC lib/util/math.o 00:03:05.423 CC lib/util/net.o 00:03:05.423 SO libspdk_ioat.so.7.0 00:03:05.423 CC lib/util/pipe.o 00:03:05.423 CC lib/vfio_user/host/vfio_user.o 00:03:05.423 SYMLINK libspdk_ioat.so 00:03:05.423 CC lib/util/strerror_tls.o 00:03:05.423 CC lib/util/string.o 00:03:05.423 CC lib/util/uuid.o 00:03:05.423 CC lib/util/xor.o 00:03:05.423 CC lib/util/zipf.o 00:03:05.423 CC lib/util/md5.o 00:03:05.423 LIB libspdk_vfio_user.a 00:03:05.423 SO libspdk_vfio_user.so.5.0 00:03:05.423 SYMLINK libspdk_vfio_user.so 00:03:05.423 LIB libspdk_util.a 00:03:05.423 SO libspdk_util.so.10.1 00:03:05.423 LIB libspdk_trace_parser.a 00:03:05.423 SYMLINK libspdk_util.so 00:03:05.423 SO libspdk_trace_parser.so.6.0 
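Once all 267 DPDK targets are linked, the log switches to SPDK's own Makefile build ("CC lib/ut_mock/mock.o" onward). The prefixes are SPDK's quiet-make shorthand: CC/CXX compile an object, LIB archives a static library, SO links the versioned shared object, and SYMLINK creates its unversioned alias. A plausible local equivalent of this stage is sketched below; the configure flags are inferred from the debug/ASan/UBSan settings visible in this build rather than copied from the job's scripts.

  ./configure --enable-debug --enable-asan --enable-ubsan --with-shared
  make -j 10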
00:03:05.423 SYMLINK libspdk_trace_parser.so 00:03:05.423 CC lib/json/json_parse.o 00:03:05.423 CC lib/json/json_util.o 00:03:05.423 CC lib/json/json_write.o 00:03:05.423 CC lib/conf/conf.o 00:03:05.423 CC lib/vmd/vmd.o 00:03:05.423 CC lib/vmd/led.o 00:03:05.423 CC lib/rdma_utils/rdma_utils.o 00:03:05.423 CC lib/env_dpdk/env.o 00:03:05.423 CC lib/idxd/idxd.o 00:03:05.423 CC lib/idxd/idxd_user.o 00:03:05.423 CC lib/env_dpdk/memory.o 00:03:05.423 LIB libspdk_conf.a 00:03:05.423 CC lib/env_dpdk/pci.o 00:03:05.423 CC lib/idxd/idxd_kernel.o 00:03:05.423 SO libspdk_conf.so.6.0 00:03:05.423 LIB libspdk_rdma_utils.a 00:03:05.423 CC lib/env_dpdk/init.o 00:03:05.423 SYMLINK libspdk_conf.so 00:03:05.423 SO libspdk_rdma_utils.so.1.0 00:03:05.423 LIB libspdk_json.a 00:03:05.423 CC lib/env_dpdk/threads.o 00:03:05.423 SO libspdk_json.so.6.0 00:03:05.423 SYMLINK libspdk_rdma_utils.so 00:03:05.423 CC lib/env_dpdk/pci_ioat.o 00:03:05.423 SYMLINK libspdk_json.so 00:03:05.423 CC lib/env_dpdk/pci_virtio.o 00:03:05.423 CC lib/env_dpdk/pci_vmd.o 00:03:05.423 CC lib/env_dpdk/pci_idxd.o 00:03:05.423 CC lib/env_dpdk/pci_event.o 00:03:05.680 CC lib/env_dpdk/sigbus_handler.o 00:03:05.680 CC lib/env_dpdk/pci_dpdk.o 00:03:05.680 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:05.680 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:05.680 LIB libspdk_idxd.a 00:03:05.680 LIB libspdk_vmd.a 00:03:05.680 SO libspdk_idxd.so.12.1 00:03:05.680 SO libspdk_vmd.so.6.0 00:03:05.937 SYMLINK libspdk_idxd.so 00:03:05.938 CC lib/rdma_provider/common.o 00:03:05.938 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:05.938 SYMLINK libspdk_vmd.so 00:03:05.938 CC lib/jsonrpc/jsonrpc_server.o 00:03:05.938 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:05.938 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:05.938 CC lib/jsonrpc/jsonrpc_client.o 00:03:05.938 LIB libspdk_rdma_provider.a 00:03:05.938 SO libspdk_rdma_provider.so.7.0 00:03:06.196 LIB libspdk_jsonrpc.a 00:03:06.196 SYMLINK libspdk_rdma_provider.so 00:03:06.196 SO libspdk_jsonrpc.so.6.0 00:03:06.196 SYMLINK libspdk_jsonrpc.so 00:03:06.486 CC lib/rpc/rpc.o 00:03:06.486 LIB libspdk_env_dpdk.a 00:03:06.772 LIB libspdk_rpc.a 00:03:06.772 SO libspdk_rpc.so.6.0 00:03:06.772 SO libspdk_env_dpdk.so.15.1 00:03:06.772 SYMLINK libspdk_rpc.so 00:03:06.772 SYMLINK libspdk_env_dpdk.so 00:03:06.772 CC lib/keyring/keyring_rpc.o 00:03:06.772 CC lib/keyring/keyring.o 00:03:06.772 CC lib/notify/notify.o 00:03:06.772 CC lib/notify/notify_rpc.o 00:03:06.772 CC lib/trace/trace.o 00:03:06.772 CC lib/trace/trace_flags.o 00:03:06.772 CC lib/trace/trace_rpc.o 00:03:07.030 LIB libspdk_notify.a 00:03:07.030 SO libspdk_notify.so.6.0 00:03:07.030 LIB libspdk_keyring.a 00:03:07.030 SYMLINK libspdk_notify.so 00:03:07.030 LIB libspdk_trace.a 00:03:07.030 SO libspdk_keyring.so.2.0 00:03:07.030 SO libspdk_trace.so.11.0 00:03:07.288 SYMLINK libspdk_keyring.so 00:03:07.288 SYMLINK libspdk_trace.so 00:03:07.288 CC lib/thread/iobuf.o 00:03:07.288 CC lib/thread/thread.o 00:03:07.547 CC lib/sock/sock.o 00:03:07.547 CC lib/sock/sock_rpc.o 00:03:07.805 LIB libspdk_sock.a 00:03:07.805 SO libspdk_sock.so.10.0 00:03:07.805 SYMLINK libspdk_sock.so 00:03:08.063 CC lib/nvme/nvme_ctrlr.o 00:03:08.063 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:08.063 CC lib/nvme/nvme_fabric.o 00:03:08.063 CC lib/nvme/nvme_ns_cmd.o 00:03:08.063 CC lib/nvme/nvme_qpair.o 00:03:08.063 CC lib/nvme/nvme_ns.o 00:03:08.063 CC lib/nvme/nvme.o 00:03:08.063 CC lib/nvme/nvme_pcie.o 00:03:08.063 CC lib/nvme/nvme_pcie_common.o 00:03:08.997 CC lib/nvme/nvme_quirks.o 00:03:08.997 CC 
lib/nvme/nvme_transport.o 00:03:08.997 CC lib/nvme/nvme_discovery.o 00:03:08.997 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:08.997 LIB libspdk_thread.a 00:03:08.997 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:08.997 SO libspdk_thread.so.11.0 00:03:08.997 CC lib/nvme/nvme_tcp.o 00:03:08.997 SYMLINK libspdk_thread.so 00:03:08.997 CC lib/nvme/nvme_opal.o 00:03:08.997 CC lib/nvme/nvme_io_msg.o 00:03:09.255 CC lib/nvme/nvme_poll_group.o 00:03:09.255 CC lib/nvme/nvme_zns.o 00:03:09.513 CC lib/nvme/nvme_stubs.o 00:03:09.513 CC lib/nvme/nvme_auth.o 00:03:09.513 CC lib/nvme/nvme_cuse.o 00:03:09.513 CC lib/nvme/nvme_rdma.o 00:03:09.771 CC lib/accel/accel.o 00:03:09.771 CC lib/accel/accel_rpc.o 00:03:09.771 CC lib/blob/blobstore.o 00:03:10.029 CC lib/init/json_config.o 00:03:10.029 CC lib/virtio/virtio.o 00:03:10.029 CC lib/accel/accel_sw.o 00:03:10.288 CC lib/init/subsystem.o 00:03:10.288 CC lib/init/subsystem_rpc.o 00:03:10.288 CC lib/virtio/virtio_vhost_user.o 00:03:10.288 CC lib/virtio/virtio_vfio_user.o 00:03:10.288 CC lib/init/rpc.o 00:03:10.288 CC lib/blob/request.o 00:03:10.288 CC lib/blob/zeroes.o 00:03:10.546 CC lib/blob/blob_bs_dev.o 00:03:10.546 CC lib/virtio/virtio_pci.o 00:03:10.546 LIB libspdk_init.a 00:03:10.546 SO libspdk_init.so.6.0 00:03:10.546 SYMLINK libspdk_init.so 00:03:10.805 CC lib/fsdev/fsdev.o 00:03:10.805 CC lib/fsdev/fsdev_rpc.o 00:03:10.805 CC lib/fsdev/fsdev_io.o 00:03:10.805 CC lib/event/app.o 00:03:10.805 CC lib/event/reactor.o 00:03:10.805 CC lib/event/log_rpc.o 00:03:10.805 LIB libspdk_virtio.a 00:03:10.805 CC lib/event/app_rpc.o 00:03:10.805 SO libspdk_virtio.so.7.0 00:03:10.805 LIB libspdk_nvme.a 00:03:10.805 SYMLINK libspdk_virtio.so 00:03:10.805 CC lib/event/scheduler_static.o 00:03:11.063 LIB libspdk_accel.a 00:03:11.063 SO libspdk_accel.so.16.0 00:03:11.063 SO libspdk_nvme.so.15.0 00:03:11.063 SYMLINK libspdk_accel.so 00:03:11.320 LIB libspdk_event.a 00:03:11.320 SO libspdk_event.so.14.0 00:03:11.320 SYMLINK libspdk_nvme.so 00:03:11.320 LIB libspdk_fsdev.a 00:03:11.320 CC lib/bdev/bdev.o 00:03:11.320 CC lib/bdev/bdev_rpc.o 00:03:11.320 CC lib/bdev/bdev_zone.o 00:03:11.320 CC lib/bdev/part.o 00:03:11.320 CC lib/bdev/scsi_nvme.o 00:03:11.320 SYMLINK libspdk_event.so 00:03:11.320 SO libspdk_fsdev.so.2.0 00:03:11.320 SYMLINK libspdk_fsdev.so 00:03:11.577 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:12.514 LIB libspdk_fuse_dispatcher.a 00:03:12.514 SO libspdk_fuse_dispatcher.so.1.0 00:03:12.514 SYMLINK libspdk_fuse_dispatcher.so 00:03:13.084 LIB libspdk_blob.a 00:03:13.342 SO libspdk_blob.so.11.0 00:03:13.342 SYMLINK libspdk_blob.so 00:03:13.600 CC lib/blobfs/blobfs.o 00:03:13.600 CC lib/blobfs/tree.o 00:03:13.600 CC lib/lvol/lvol.o 00:03:14.165 LIB libspdk_bdev.a 00:03:14.165 SO libspdk_bdev.so.17.0 00:03:14.165 SYMLINK libspdk_bdev.so 00:03:14.422 CC lib/nbd/nbd.o 00:03:14.422 CC lib/nbd/nbd_rpc.o 00:03:14.422 CC lib/nvmf/ctrlr.o 00:03:14.422 CC lib/ublk/ublk.o 00:03:14.422 CC lib/ublk/ublk_rpc.o 00:03:14.422 CC lib/nvmf/ctrlr_discovery.o 00:03:14.422 CC lib/scsi/dev.o 00:03:14.422 CC lib/ftl/ftl_core.o 00:03:14.422 LIB libspdk_blobfs.a 00:03:14.422 SO libspdk_blobfs.so.10.0 00:03:14.422 CC lib/scsi/lun.o 00:03:14.422 SYMLINK libspdk_blobfs.so 00:03:14.422 CC lib/scsi/port.o 00:03:14.734 CC lib/nvmf/ctrlr_bdev.o 00:03:14.734 LIB libspdk_lvol.a 00:03:14.734 SO libspdk_lvol.so.10.0 00:03:14.734 CC lib/scsi/scsi.o 00:03:14.734 CC lib/scsi/scsi_bdev.o 00:03:14.734 SYMLINK libspdk_lvol.so 00:03:14.734 CC lib/scsi/scsi_pr.o 00:03:14.734 CC lib/scsi/scsi_rpc.o 00:03:14.734 CC 
lib/ftl/ftl_init.o 00:03:14.734 LIB libspdk_nbd.a 00:03:14.734 CC lib/scsi/task.o 00:03:14.995 SO libspdk_nbd.so.7.0 00:03:14.995 CC lib/nvmf/subsystem.o 00:03:14.995 SYMLINK libspdk_nbd.so 00:03:14.995 CC lib/nvmf/nvmf.o 00:03:14.995 CC lib/ftl/ftl_layout.o 00:03:14.995 CC lib/ftl/ftl_debug.o 00:03:14.995 CC lib/nvmf/nvmf_rpc.o 00:03:14.995 CC lib/nvmf/transport.o 00:03:14.995 LIB libspdk_scsi.a 00:03:14.995 LIB libspdk_ublk.a 00:03:14.995 SO libspdk_ublk.so.3.0 00:03:14.995 SO libspdk_scsi.so.9.0 00:03:15.254 SYMLINK libspdk_ublk.so 00:03:15.254 CC lib/nvmf/tcp.o 00:03:15.254 SYMLINK libspdk_scsi.so 00:03:15.254 CC lib/nvmf/stubs.o 00:03:15.254 CC lib/ftl/ftl_io.o 00:03:15.254 CC lib/ftl/ftl_sb.o 00:03:15.254 CC lib/nvmf/mdns_server.o 00:03:15.513 CC lib/ftl/ftl_l2p.o 00:03:15.513 CC lib/ftl/ftl_l2p_flat.o 00:03:15.513 CC lib/nvmf/rdma.o 00:03:15.513 CC lib/ftl/ftl_nv_cache.o 00:03:15.513 CC lib/ftl/ftl_band.o 00:03:15.773 CC lib/nvmf/auth.o 00:03:15.773 CC lib/ftl/ftl_band_ops.o 00:03:15.773 CC lib/iscsi/conn.o 00:03:16.032 CC lib/vhost/vhost.o 00:03:16.032 CC lib/ftl/ftl_writer.o 00:03:16.032 CC lib/ftl/ftl_rq.o 00:03:16.032 CC lib/ftl/ftl_reloc.o 00:03:16.290 CC lib/ftl/ftl_l2p_cache.o 00:03:16.290 CC lib/ftl/ftl_p2l.o 00:03:16.290 CC lib/ftl/ftl_p2l_log.o 00:03:16.290 CC lib/ftl/mngt/ftl_mngt.o 00:03:16.548 CC lib/iscsi/init_grp.o 00:03:16.548 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:16.548 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:16.548 CC lib/vhost/vhost_rpc.o 00:03:16.548 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:16.548 CC lib/vhost/vhost_scsi.o 00:03:16.548 CC lib/iscsi/iscsi.o 00:03:16.548 CC lib/iscsi/param.o 00:03:16.548 CC lib/iscsi/portal_grp.o 00:03:16.548 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:16.548 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:16.806 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:16.806 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:16.806 CC lib/vhost/vhost_blk.o 00:03:16.806 CC lib/iscsi/tgt_node.o 00:03:16.806 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:17.064 CC lib/iscsi/iscsi_subsystem.o 00:03:17.064 CC lib/vhost/rte_vhost_user.o 00:03:17.064 CC lib/iscsi/iscsi_rpc.o 00:03:17.064 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:17.321 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:17.321 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:17.321 CC lib/iscsi/task.o 00:03:17.321 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:17.321 CC lib/ftl/utils/ftl_conf.o 00:03:17.321 CC lib/ftl/utils/ftl_md.o 00:03:17.321 CC lib/ftl/utils/ftl_mempool.o 00:03:17.580 CC lib/ftl/utils/ftl_bitmap.o 00:03:17.580 CC lib/ftl/utils/ftl_property.o 00:03:17.580 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:17.580 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:17.580 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:17.580 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:17.580 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:17.838 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:17.838 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:17.838 LIB libspdk_nvmf.a 00:03:17.838 LIB libspdk_vhost.a 00:03:17.838 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:17.838 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:17.838 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:17.838 LIB libspdk_iscsi.a 00:03:17.838 SO libspdk_vhost.so.8.0 00:03:17.838 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:17.838 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:17.838 SO libspdk_nvmf.so.20.0 00:03:17.838 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:17.838 SO libspdk_iscsi.so.8.0 00:03:17.838 CC lib/ftl/base/ftl_base_dev.o 00:03:17.838 SYMLINK libspdk_vhost.so 00:03:17.838 CC lib/ftl/base/ftl_base_bdev.o 00:03:18.096 CC lib/ftl/ftl_trace.o 
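The compile order above tracks SPDK's layering: core libraries first (log, util, json, thread), then the NVMe and bdev stacks, then the protocol targets (nbd, scsi, ublk, nvmf, iscsi, vhost), with ftl's management units last in this stretch. To sanity-check what one of the freshly linked shared objects actually pulls in, something along these lines works; build/lib is SPDK's default output directory and may differ if the prefix was changed.

  ls build/lib/libspdk_*.so
  ldd build/lib/libspdk_ftl.so | grep -E 'libspdk|librte'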
00:03:18.096 SYMLINK libspdk_iscsi.so 00:03:18.096 SYMLINK libspdk_nvmf.so 00:03:18.096 LIB libspdk_ftl.a 00:03:18.354 SO libspdk_ftl.so.9.0 00:03:18.612 SYMLINK libspdk_ftl.so 00:03:18.923 CC module/env_dpdk/env_dpdk_rpc.o 00:03:18.923 CC module/sock/posix/posix.o 00:03:18.923 CC module/accel/ioat/accel_ioat.o 00:03:18.923 CC module/keyring/file/keyring.o 00:03:18.923 CC module/accel/error/accel_error.o 00:03:18.923 CC module/blob/bdev/blob_bdev.o 00:03:18.923 CC module/fsdev/aio/fsdev_aio.o 00:03:18.923 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:18.923 CC module/keyring/linux/keyring.o 00:03:18.923 CC module/accel/dsa/accel_dsa.o 00:03:18.923 LIB libspdk_env_dpdk_rpc.a 00:03:18.923 SO libspdk_env_dpdk_rpc.so.6.0 00:03:19.181 SYMLINK libspdk_env_dpdk_rpc.so 00:03:19.181 CC module/accel/dsa/accel_dsa_rpc.o 00:03:19.181 CC module/keyring/file/keyring_rpc.o 00:03:19.181 CC module/accel/error/accel_error_rpc.o 00:03:19.181 CC module/keyring/linux/keyring_rpc.o 00:03:19.181 CC module/accel/ioat/accel_ioat_rpc.o 00:03:19.181 LIB libspdk_scheduler_dynamic.a 00:03:19.181 SO libspdk_scheduler_dynamic.so.4.0 00:03:19.181 LIB libspdk_accel_error.a 00:03:19.181 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:19.181 LIB libspdk_keyring_file.a 00:03:19.181 LIB libspdk_accel_dsa.a 00:03:19.181 LIB libspdk_keyring_linux.a 00:03:19.181 SO libspdk_accel_error.so.2.0 00:03:19.181 LIB libspdk_blob_bdev.a 00:03:19.181 SYMLINK libspdk_scheduler_dynamic.so 00:03:19.181 SO libspdk_accel_dsa.so.5.0 00:03:19.181 SO libspdk_keyring_linux.so.1.0 00:03:19.181 SO libspdk_keyring_file.so.2.0 00:03:19.181 SO libspdk_blob_bdev.so.11.0 00:03:19.181 LIB libspdk_accel_ioat.a 00:03:19.181 SYMLINK libspdk_accel_error.so 00:03:19.181 SYMLINK libspdk_accel_dsa.so 00:03:19.181 SYMLINK libspdk_keyring_file.so 00:03:19.181 SYMLINK libspdk_keyring_linux.so 00:03:19.181 SO libspdk_accel_ioat.so.6.0 00:03:19.181 SYMLINK libspdk_blob_bdev.so 00:03:19.181 CC module/fsdev/aio/linux_aio_mgr.o 00:03:19.439 SYMLINK libspdk_accel_ioat.so 00:03:19.439 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:19.439 CC module/scheduler/gscheduler/gscheduler.o 00:03:19.439 CC module/accel/iaa/accel_iaa.o 00:03:19.439 CC module/accel/iaa/accel_iaa_rpc.o 00:03:19.439 CC module/bdev/delay/vbdev_delay.o 00:03:19.439 CC module/bdev/gpt/gpt.o 00:03:19.439 CC module/bdev/error/vbdev_error.o 00:03:19.439 CC module/blobfs/bdev/blobfs_bdev.o 00:03:19.439 LIB libspdk_sock_posix.a 00:03:19.439 LIB libspdk_scheduler_gscheduler.a 00:03:19.439 LIB libspdk_scheduler_dpdk_governor.a 00:03:19.699 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:19.699 SO libspdk_sock_posix.so.6.0 00:03:19.699 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:19.699 SO libspdk_scheduler_gscheduler.so.4.0 00:03:19.699 LIB libspdk_accel_iaa.a 00:03:19.699 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:19.699 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:19.699 SO libspdk_accel_iaa.so.3.0 00:03:19.699 SYMLINK libspdk_sock_posix.so 00:03:19.699 SYMLINK libspdk_scheduler_gscheduler.so 00:03:19.699 CC module/bdev/error/vbdev_error_rpc.o 00:03:19.699 LIB libspdk_fsdev_aio.a 00:03:19.699 SO libspdk_fsdev_aio.so.1.0 00:03:19.699 SYMLINK libspdk_accel_iaa.so 00:03:19.699 CC module/bdev/gpt/vbdev_gpt.o 00:03:19.699 SYMLINK libspdk_fsdev_aio.so 00:03:19.699 LIB libspdk_blobfs_bdev.a 00:03:19.699 SO libspdk_blobfs_bdev.so.6.0 00:03:19.699 CC module/bdev/lvol/vbdev_lvol.o 00:03:19.699 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:19.699 CC module/bdev/malloc/bdev_malloc.o 00:03:19.699 LIB 
libspdk_bdev_error.a 00:03:19.699 SYMLINK libspdk_blobfs_bdev.so 00:03:19.699 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:19.959 SO libspdk_bdev_error.so.6.0 00:03:19.959 CC module/bdev/null/bdev_null.o 00:03:19.959 CC module/bdev/passthru/vbdev_passthru.o 00:03:19.959 CC module/bdev/nvme/bdev_nvme.o 00:03:19.959 SYMLINK libspdk_bdev_error.so 00:03:19.959 LIB libspdk_bdev_delay.a 00:03:19.959 SO libspdk_bdev_delay.so.6.0 00:03:19.959 LIB libspdk_bdev_gpt.a 00:03:19.959 SO libspdk_bdev_gpt.so.6.0 00:03:19.959 SYMLINK libspdk_bdev_delay.so 00:03:19.959 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:19.959 SYMLINK libspdk_bdev_gpt.so 00:03:19.959 CC module/bdev/raid/bdev_raid.o 00:03:19.959 CC module/bdev/null/bdev_null_rpc.o 00:03:20.217 LIB libspdk_bdev_passthru.a 00:03:20.217 CC module/bdev/split/vbdev_split.o 00:03:20.217 SO libspdk_bdev_passthru.so.6.0 00:03:20.217 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:20.217 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:20.217 LIB libspdk_bdev_null.a 00:03:20.217 LIB libspdk_bdev_malloc.a 00:03:20.217 SYMLINK libspdk_bdev_passthru.so 00:03:20.217 SO libspdk_bdev_malloc.so.6.0 00:03:20.217 SO libspdk_bdev_null.so.6.0 00:03:20.217 CC module/bdev/xnvme/bdev_xnvme.o 00:03:20.217 SYMLINK libspdk_bdev_null.so 00:03:20.217 SYMLINK libspdk_bdev_malloc.so 00:03:20.217 CC module/bdev/split/vbdev_split_rpc.o 00:03:20.217 LIB libspdk_bdev_lvol.a 00:03:20.217 SO libspdk_bdev_lvol.so.6.0 00:03:20.217 CC module/bdev/raid/bdev_raid_rpc.o 00:03:20.475 CC module/bdev/aio/bdev_aio.o 00:03:20.475 SYMLINK libspdk_bdev_lvol.so 00:03:20.475 CC module/bdev/ftl/bdev_ftl.o 00:03:20.475 CC module/bdev/iscsi/bdev_iscsi.o 00:03:20.475 LIB libspdk_bdev_split.a 00:03:20.475 SO libspdk_bdev_split.so.6.0 00:03:20.475 LIB libspdk_bdev_zone_block.a 00:03:20.475 SO libspdk_bdev_zone_block.so.6.0 00:03:20.475 SYMLINK libspdk_bdev_split.so 00:03:20.475 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:20.475 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:20.475 CC module/bdev/nvme/nvme_rpc.o 00:03:20.475 SYMLINK libspdk_bdev_zone_block.so 00:03:20.475 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:20.475 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:20.734 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:20.734 CC module/bdev/aio/bdev_aio_rpc.o 00:03:20.734 LIB libspdk_bdev_xnvme.a 00:03:20.734 SO libspdk_bdev_xnvme.so.3.0 00:03:20.734 LIB libspdk_bdev_ftl.a 00:03:20.734 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:20.734 SO libspdk_bdev_ftl.so.6.0 00:03:20.734 SYMLINK libspdk_bdev_xnvme.so 00:03:20.734 CC module/bdev/raid/bdev_raid_sb.o 00:03:20.734 CC module/bdev/nvme/bdev_mdns_client.o 00:03:20.734 LIB libspdk_bdev_aio.a 00:03:20.734 SYMLINK libspdk_bdev_ftl.so 00:03:20.734 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:20.734 SO libspdk_bdev_aio.so.6.0 00:03:20.992 CC module/bdev/nvme/vbdev_opal.o 00:03:20.992 SYMLINK libspdk_bdev_aio.so 00:03:20.992 CC module/bdev/raid/raid0.o 00:03:20.992 LIB libspdk_bdev_iscsi.a 00:03:20.992 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:20.992 SO libspdk_bdev_iscsi.so.6.0 00:03:20.992 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:20.992 LIB libspdk_bdev_virtio.a 00:03:20.992 SYMLINK libspdk_bdev_iscsi.so 00:03:20.992 CC module/bdev/raid/raid1.o 00:03:20.992 CC module/bdev/raid/concat.o 00:03:20.992 SO libspdk_bdev_virtio.so.6.0 00:03:20.992 SYMLINK libspdk_bdev_virtio.so 00:03:21.250 LIB libspdk_bdev_raid.a 00:03:21.250 SO libspdk_bdev_raid.so.6.0 00:03:21.508 SYMLINK libspdk_bdev_raid.so 00:03:22.453 LIB libspdk_bdev_nvme.a 00:03:22.453 SO 
libspdk_bdev_nvme.so.7.1 00:03:22.453 SYMLINK libspdk_bdev_nvme.so 00:03:23.019 CC module/event/subsystems/keyring/keyring.o 00:03:23.019 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:23.019 CC module/event/subsystems/vmd/vmd.o 00:03:23.019 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:23.019 CC module/event/subsystems/iobuf/iobuf.o 00:03:23.019 CC module/event/subsystems/sock/sock.o 00:03:23.019 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:23.019 CC module/event/subsystems/fsdev/fsdev.o 00:03:23.019 CC module/event/subsystems/scheduler/scheduler.o 00:03:23.019 LIB libspdk_event_keyring.a 00:03:23.019 LIB libspdk_event_vhost_blk.a 00:03:23.019 LIB libspdk_event_vmd.a 00:03:23.019 LIB libspdk_event_fsdev.a 00:03:23.019 LIB libspdk_event_sock.a 00:03:23.019 SO libspdk_event_keyring.so.1.0 00:03:23.019 SO libspdk_event_vhost_blk.so.3.0 00:03:23.019 LIB libspdk_event_iobuf.a 00:03:23.019 LIB libspdk_event_scheduler.a 00:03:23.019 SO libspdk_event_vmd.so.6.0 00:03:23.019 SO libspdk_event_fsdev.so.1.0 00:03:23.019 SO libspdk_event_sock.so.5.0 00:03:23.019 SO libspdk_event_scheduler.so.4.0 00:03:23.019 SO libspdk_event_iobuf.so.3.0 00:03:23.019 SYMLINK libspdk_event_keyring.so 00:03:23.019 SYMLINK libspdk_event_vhost_blk.so 00:03:23.019 SYMLINK libspdk_event_fsdev.so 00:03:23.019 SYMLINK libspdk_event_vmd.so 00:03:23.019 SYMLINK libspdk_event_sock.so 00:03:23.019 SYMLINK libspdk_event_iobuf.so 00:03:23.019 SYMLINK libspdk_event_scheduler.so 00:03:23.277 CC module/event/subsystems/accel/accel.o 00:03:23.277 LIB libspdk_event_accel.a 00:03:23.539 SO libspdk_event_accel.so.6.0 00:03:23.539 SYMLINK libspdk_event_accel.so 00:03:23.816 CC module/event/subsystems/bdev/bdev.o 00:03:23.817 LIB libspdk_event_bdev.a 00:03:23.817 SO libspdk_event_bdev.so.6.0 00:03:23.817 SYMLINK libspdk_event_bdev.so 00:03:24.075 CC module/event/subsystems/scsi/scsi.o 00:03:24.075 CC module/event/subsystems/ublk/ublk.o 00:03:24.075 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:24.075 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:24.075 CC module/event/subsystems/nbd/nbd.o 00:03:24.075 LIB libspdk_event_scsi.a 00:03:24.333 LIB libspdk_event_ublk.a 00:03:24.333 SO libspdk_event_scsi.so.6.0 00:03:24.333 LIB libspdk_event_nbd.a 00:03:24.333 SO libspdk_event_ublk.so.3.0 00:03:24.333 SO libspdk_event_nbd.so.6.0 00:03:24.333 SYMLINK libspdk_event_scsi.so 00:03:24.333 SYMLINK libspdk_event_ublk.so 00:03:24.333 SYMLINK libspdk_event_nbd.so 00:03:24.333 LIB libspdk_event_nvmf.a 00:03:24.333 SO libspdk_event_nvmf.so.6.0 00:03:24.333 SYMLINK libspdk_event_nvmf.so 00:03:24.333 CC module/event/subsystems/iscsi/iscsi.o 00:03:24.591 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:24.592 LIB libspdk_event_iscsi.a 00:03:24.592 LIB libspdk_event_vhost_scsi.a 00:03:24.592 SO libspdk_event_vhost_scsi.so.3.0 00:03:24.592 SO libspdk_event_iscsi.so.6.0 00:03:24.592 SYMLINK libspdk_event_vhost_scsi.so 00:03:24.592 SYMLINK libspdk_event_iscsi.so 00:03:24.849 SO libspdk.so.6.0 00:03:24.849 SYMLINK libspdk.so 00:03:25.107 CC test/rpc_client/rpc_client_test.o 00:03:25.107 CXX app/trace/trace.o 00:03:25.107 TEST_HEADER include/spdk/accel.h 00:03:25.107 TEST_HEADER include/spdk/accel_module.h 00:03:25.107 TEST_HEADER include/spdk/assert.h 00:03:25.107 TEST_HEADER include/spdk/barrier.h 00:03:25.107 TEST_HEADER include/spdk/base64.h 00:03:25.107 TEST_HEADER include/spdk/bdev.h 00:03:25.107 TEST_HEADER include/spdk/bdev_module.h 00:03:25.107 TEST_HEADER include/spdk/bdev_zone.h 00:03:25.107 TEST_HEADER include/spdk/bit_array.h 
00:03:25.107 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:25.107 TEST_HEADER include/spdk/bit_pool.h 00:03:25.107 TEST_HEADER include/spdk/blob_bdev.h 00:03:25.107 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:25.107 TEST_HEADER include/spdk/blobfs.h 00:03:25.107 TEST_HEADER include/spdk/blob.h 00:03:25.107 TEST_HEADER include/spdk/conf.h 00:03:25.107 TEST_HEADER include/spdk/config.h 00:03:25.107 TEST_HEADER include/spdk/cpuset.h 00:03:25.107 TEST_HEADER include/spdk/crc16.h 00:03:25.107 TEST_HEADER include/spdk/crc32.h 00:03:25.107 CC examples/ioat/perf/perf.o 00:03:25.107 TEST_HEADER include/spdk/crc64.h 00:03:25.107 TEST_HEADER include/spdk/dif.h 00:03:25.107 TEST_HEADER include/spdk/dma.h 00:03:25.107 CC test/thread/poller_perf/poller_perf.o 00:03:25.107 TEST_HEADER include/spdk/endian.h 00:03:25.107 TEST_HEADER include/spdk/env_dpdk.h 00:03:25.107 CC examples/util/zipf/zipf.o 00:03:25.107 TEST_HEADER include/spdk/env.h 00:03:25.107 TEST_HEADER include/spdk/event.h 00:03:25.107 TEST_HEADER include/spdk/fd_group.h 00:03:25.107 TEST_HEADER include/spdk/fd.h 00:03:25.107 TEST_HEADER include/spdk/file.h 00:03:25.107 TEST_HEADER include/spdk/fsdev.h 00:03:25.107 TEST_HEADER include/spdk/fsdev_module.h 00:03:25.107 TEST_HEADER include/spdk/ftl.h 00:03:25.107 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:25.107 TEST_HEADER include/spdk/gpt_spec.h 00:03:25.107 TEST_HEADER include/spdk/hexlify.h 00:03:25.107 TEST_HEADER include/spdk/histogram_data.h 00:03:25.107 TEST_HEADER include/spdk/idxd.h 00:03:25.107 TEST_HEADER include/spdk/idxd_spec.h 00:03:25.107 TEST_HEADER include/spdk/init.h 00:03:25.107 TEST_HEADER include/spdk/ioat.h 00:03:25.107 TEST_HEADER include/spdk/ioat_spec.h 00:03:25.107 TEST_HEADER include/spdk/iscsi_spec.h 00:03:25.107 TEST_HEADER include/spdk/json.h 00:03:25.107 TEST_HEADER include/spdk/jsonrpc.h 00:03:25.107 TEST_HEADER include/spdk/keyring.h 00:03:25.107 TEST_HEADER include/spdk/keyring_module.h 00:03:25.107 TEST_HEADER include/spdk/likely.h 00:03:25.107 TEST_HEADER include/spdk/log.h 00:03:25.107 TEST_HEADER include/spdk/lvol.h 00:03:25.107 CC test/app/bdev_svc/bdev_svc.o 00:03:25.107 TEST_HEADER include/spdk/md5.h 00:03:25.107 TEST_HEADER include/spdk/memory.h 00:03:25.107 TEST_HEADER include/spdk/mmio.h 00:03:25.108 CC test/dma/test_dma/test_dma.o 00:03:25.108 TEST_HEADER include/spdk/nbd.h 00:03:25.108 TEST_HEADER include/spdk/net.h 00:03:25.108 TEST_HEADER include/spdk/notify.h 00:03:25.108 TEST_HEADER include/spdk/nvme.h 00:03:25.108 TEST_HEADER include/spdk/nvme_intel.h 00:03:25.108 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:25.108 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:25.108 TEST_HEADER include/spdk/nvme_spec.h 00:03:25.108 TEST_HEADER include/spdk/nvme_zns.h 00:03:25.108 CC test/env/mem_callbacks/mem_callbacks.o 00:03:25.108 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:25.108 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:25.108 TEST_HEADER include/spdk/nvmf.h 00:03:25.108 TEST_HEADER include/spdk/nvmf_spec.h 00:03:25.108 TEST_HEADER include/spdk/nvmf_transport.h 00:03:25.108 TEST_HEADER include/spdk/opal.h 00:03:25.108 TEST_HEADER include/spdk/opal_spec.h 00:03:25.108 TEST_HEADER include/spdk/pci_ids.h 00:03:25.108 TEST_HEADER include/spdk/pipe.h 00:03:25.108 TEST_HEADER include/spdk/queue.h 00:03:25.108 TEST_HEADER include/spdk/reduce.h 00:03:25.108 TEST_HEADER include/spdk/rpc.h 00:03:25.108 TEST_HEADER include/spdk/scheduler.h 00:03:25.108 TEST_HEADER include/spdk/scsi.h 00:03:25.108 TEST_HEADER include/spdk/scsi_spec.h 00:03:25.108 
TEST_HEADER include/spdk/sock.h 00:03:25.108 TEST_HEADER include/spdk/stdinc.h 00:03:25.108 TEST_HEADER include/spdk/string.h 00:03:25.108 TEST_HEADER include/spdk/thread.h 00:03:25.108 TEST_HEADER include/spdk/trace.h 00:03:25.108 TEST_HEADER include/spdk/trace_parser.h 00:03:25.108 TEST_HEADER include/spdk/tree.h 00:03:25.108 TEST_HEADER include/spdk/ublk.h 00:03:25.108 LINK rpc_client_test 00:03:25.108 TEST_HEADER include/spdk/util.h 00:03:25.108 TEST_HEADER include/spdk/uuid.h 00:03:25.108 TEST_HEADER include/spdk/version.h 00:03:25.108 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:25.108 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:25.108 TEST_HEADER include/spdk/vhost.h 00:03:25.108 TEST_HEADER include/spdk/vmd.h 00:03:25.108 TEST_HEADER include/spdk/xor.h 00:03:25.108 TEST_HEADER include/spdk/zipf.h 00:03:25.108 CXX test/cpp_headers/accel.o 00:03:25.108 LINK interrupt_tgt 00:03:25.108 LINK zipf 00:03:25.108 LINK poller_perf 00:03:25.366 LINK bdev_svc 00:03:25.366 LINK ioat_perf 00:03:25.366 CXX test/cpp_headers/accel_module.o 00:03:25.366 LINK spdk_trace 00:03:25.366 CC test/env/vtophys/vtophys.o 00:03:25.366 CC examples/ioat/verify/verify.o 00:03:25.366 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:25.366 CC test/env/memory/memory_ut.o 00:03:25.366 CXX test/cpp_headers/assert.o 00:03:25.624 LINK test_dma 00:03:25.624 LINK vtophys 00:03:25.624 CC examples/thread/thread/thread_ex.o 00:03:25.624 LINK env_dpdk_post_init 00:03:25.624 CC app/trace_record/trace_record.o 00:03:25.624 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:25.624 LINK verify 00:03:25.624 LINK mem_callbacks 00:03:25.624 CXX test/cpp_headers/barrier.o 00:03:25.624 CXX test/cpp_headers/base64.o 00:03:25.624 CXX test/cpp_headers/bdev.o 00:03:25.624 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:25.883 LINK thread 00:03:25.883 LINK spdk_trace_record 00:03:25.883 CXX test/cpp_headers/bdev_module.o 00:03:25.883 CC test/event/event_perf/event_perf.o 00:03:25.883 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:25.883 CC test/nvme/aer/aer.o 00:03:25.883 LINK nvme_fuzz 00:03:25.883 CXX test/cpp_headers/bdev_zone.o 00:03:25.883 CC examples/sock/hello_world/hello_sock.o 00:03:25.883 LINK event_perf 00:03:25.883 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:26.142 CXX test/cpp_headers/bit_array.o 00:03:26.142 CC app/nvmf_tgt/nvmf_main.o 00:03:26.142 CC test/app/histogram_perf/histogram_perf.o 00:03:26.142 LINK hello_sock 00:03:26.142 CC test/event/reactor/reactor.o 00:03:26.142 CC test/app/jsoncat/jsoncat.o 00:03:26.142 LINK aer 00:03:26.142 CXX test/cpp_headers/bit_pool.o 00:03:26.142 LINK nvmf_tgt 00:03:26.142 LINK histogram_perf 00:03:26.401 LINK reactor 00:03:26.401 LINK jsoncat 00:03:26.401 CXX test/cpp_headers/blob_bdev.o 00:03:26.401 LINK vhost_fuzz 00:03:26.401 CC examples/vmd/lsvmd/lsvmd.o 00:03:26.401 CC test/nvme/reset/reset.o 00:03:26.401 CC test/app/stub/stub.o 00:03:26.401 CC test/event/reactor_perf/reactor_perf.o 00:03:26.401 CC test/env/pci/pci_ut.o 00:03:26.401 CC app/iscsi_tgt/iscsi_tgt.o 00:03:26.659 CXX test/cpp_headers/blobfs_bdev.o 00:03:26.659 LINK memory_ut 00:03:26.659 LINK lsvmd 00:03:26.659 LINK reactor_perf 00:03:26.659 LINK stub 00:03:26.659 CC test/nvme/sgl/sgl.o 00:03:26.659 CXX test/cpp_headers/blobfs.o 00:03:26.659 LINK reset 00:03:26.659 LINK iscsi_tgt 00:03:26.659 CC examples/vmd/led/led.o 00:03:26.659 CC test/nvme/e2edp/nvme_dp.o 00:03:26.659 CC test/event/app_repeat/app_repeat.o 00:03:26.659 CXX test/cpp_headers/blob.o 00:03:26.916 CC test/event/scheduler/scheduler.o 
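The CXX test/cpp_headers/*.o entries running through this build are a header self-containment pass: each public SPDK header is compiled as its own C++ translation unit, so a header that forgets one of its own includes fails right here. A minimal sketch of the idea in shell, assuming an SPDK checkout as the working directory; this is a hypothetical harness, not SPDK's actual test/cpp_headers machinery:

    # Compile each public header standalone; a failure flags a header that
    # is not self-contained or not C++-clean.
    for h in include/spdk/*.h; do
      echo "#include <spdk/$(basename "$h")>" > /tmp/header_tu.cpp
      g++ -I include -std=c++17 -c /tmp/header_tu.cpp -o /dev/null \
        || echo "not self-contained: $h"
    done
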
00:03:26.916 CXX test/cpp_headers/conf.o 00:03:26.916 LINK pci_ut 00:03:26.916 LINK sgl 00:03:26.916 LINK app_repeat 00:03:26.916 LINK led 00:03:26.916 CXX test/cpp_headers/config.o 00:03:26.916 CC app/spdk_lspci/spdk_lspci.o 00:03:26.916 CXX test/cpp_headers/cpuset.o 00:03:26.916 CXX test/cpp_headers/crc16.o 00:03:26.916 CXX test/cpp_headers/crc32.o 00:03:26.916 CC app/spdk_tgt/spdk_tgt.o 00:03:26.916 LINK nvme_dp 00:03:26.916 LINK scheduler 00:03:27.175 CXX test/cpp_headers/crc64.o 00:03:27.175 LINK spdk_lspci 00:03:27.175 CC test/accel/dif/dif.o 00:03:27.175 CXX test/cpp_headers/dif.o 00:03:27.175 CC test/nvme/overhead/overhead.o 00:03:27.175 CXX test/cpp_headers/dma.o 00:03:27.175 LINK iscsi_fuzz 00:03:27.175 CC examples/idxd/perf/perf.o 00:03:27.175 CC test/nvme/err_injection/err_injection.o 00:03:27.175 LINK spdk_tgt 00:03:27.175 CXX test/cpp_headers/endian.o 00:03:27.175 CC test/nvme/startup/startup.o 00:03:27.433 CXX test/cpp_headers/env_dpdk.o 00:03:27.433 CC test/nvme/reserve/reserve.o 00:03:27.433 LINK err_injection 00:03:27.433 CC test/nvme/simple_copy/simple_copy.o 00:03:27.433 CXX test/cpp_headers/env.o 00:03:27.433 LINK overhead 00:03:27.433 CC app/spdk_nvme_perf/perf.o 00:03:27.433 LINK idxd_perf 00:03:27.433 LINK startup 00:03:27.433 CXX test/cpp_headers/event.o 00:03:27.433 LINK reserve 00:03:27.433 CXX test/cpp_headers/fd_group.o 00:03:27.433 CC app/spdk_nvme_identify/identify.o 00:03:27.691 CC app/spdk_nvme_discover/discovery_aer.o 00:03:27.691 LINK simple_copy 00:03:27.691 CXX test/cpp_headers/fd.o 00:03:27.691 CC examples/accel/perf/accel_perf.o 00:03:27.691 CXX test/cpp_headers/file.o 00:03:27.691 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:27.691 LINK spdk_nvme_discover 00:03:27.691 CC test/nvme/connect_stress/connect_stress.o 00:03:27.691 CXX test/cpp_headers/fsdev.o 00:03:27.691 CC test/nvme/boot_partition/boot_partition.o 00:03:27.950 CC test/blobfs/mkfs/mkfs.o 00:03:27.950 LINK dif 00:03:27.950 LINK hello_fsdev 00:03:27.950 LINK connect_stress 00:03:27.950 CXX test/cpp_headers/fsdev_module.o 00:03:27.950 LINK boot_partition 00:03:27.950 CC test/nvme/compliance/nvme_compliance.o 00:03:27.950 LINK mkfs 00:03:27.950 LINK accel_perf 00:03:27.950 CXX test/cpp_headers/ftl.o 00:03:27.950 LINK spdk_nvme_perf 00:03:27.950 CC test/nvme/fused_ordering/fused_ordering.o 00:03:28.208 CC examples/nvme/hello_world/hello_world.o 00:03:28.208 CC examples/blob/hello_world/hello_blob.o 00:03:28.208 CXX test/cpp_headers/fuse_dispatcher.o 00:03:28.208 CC app/spdk_top/spdk_top.o 00:03:28.208 CC test/lvol/esnap/esnap.o 00:03:28.208 LINK fused_ordering 00:03:28.208 LINK nvme_compliance 00:03:28.208 LINK hello_world 00:03:28.208 CC test/bdev/bdevio/bdevio.o 00:03:28.208 LINK hello_blob 00:03:28.208 CXX test/cpp_headers/gpt_spec.o 00:03:28.466 LINK spdk_nvme_identify 00:03:28.466 CC examples/bdev/hello_world/hello_bdev.o 00:03:28.466 CC app/vhost/vhost.o 00:03:28.466 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:28.466 CXX test/cpp_headers/hexlify.o 00:03:28.466 CC examples/nvme/reconnect/reconnect.o 00:03:28.466 CC examples/blob/cli/blobcli.o 00:03:28.466 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:28.466 CXX test/cpp_headers/histogram_data.o 00:03:28.725 LINK vhost 00:03:28.725 LINK doorbell_aers 00:03:28.725 LINK hello_bdev 00:03:28.725 CXX test/cpp_headers/idxd.o 00:03:28.725 LINK bdevio 00:03:28.725 CC test/nvme/fdp/fdp.o 00:03:28.725 CXX test/cpp_headers/idxd_spec.o 00:03:28.725 CC test/nvme/cuse/cuse.o 00:03:28.725 LINK reconnect 00:03:28.725 CXX 
test/cpp_headers/init.o 00:03:28.982 CC examples/bdev/bdevperf/bdevperf.o 00:03:28.982 LINK blobcli 00:03:28.982 CXX test/cpp_headers/ioat.o 00:03:28.982 CC examples/nvme/arbitration/arbitration.o 00:03:28.982 CC examples/nvme/hotplug/hotplug.o 00:03:28.982 LINK fdp 00:03:28.982 LINK nvme_manage 00:03:28.982 CXX test/cpp_headers/ioat_spec.o 00:03:28.982 LINK spdk_top 00:03:28.982 CXX test/cpp_headers/iscsi_spec.o 00:03:29.240 CXX test/cpp_headers/json.o 00:03:29.240 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:29.240 LINK hotplug 00:03:29.240 CC examples/nvme/abort/abort.o 00:03:29.240 CXX test/cpp_headers/jsonrpc.o 00:03:29.240 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:29.240 LINK arbitration 00:03:29.240 CC app/spdk_dd/spdk_dd.o 00:03:29.240 LINK cmb_copy 00:03:29.240 CXX test/cpp_headers/keyring.o 00:03:29.498 CXX test/cpp_headers/keyring_module.o 00:03:29.498 CXX test/cpp_headers/likely.o 00:03:29.498 CXX test/cpp_headers/log.o 00:03:29.498 LINK pmr_persistence 00:03:29.498 CC app/fio/nvme/fio_plugin.o 00:03:29.498 CXX test/cpp_headers/lvol.o 00:03:29.498 CXX test/cpp_headers/md5.o 00:03:29.499 CXX test/cpp_headers/memory.o 00:03:29.499 LINK spdk_dd 00:03:29.499 LINK abort 00:03:29.757 CC app/fio/bdev/fio_plugin.o 00:03:29.757 CXX test/cpp_headers/mmio.o 00:03:29.757 LINK bdevperf 00:03:29.757 CXX test/cpp_headers/nbd.o 00:03:29.757 CXX test/cpp_headers/net.o 00:03:29.757 CXX test/cpp_headers/notify.o 00:03:29.757 CXX test/cpp_headers/nvme.o 00:03:29.757 CXX test/cpp_headers/nvme_intel.o 00:03:29.757 CXX test/cpp_headers/nvme_ocssd.o 00:03:29.757 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:29.757 CXX test/cpp_headers/nvme_spec.o 00:03:29.757 LINK cuse 00:03:30.015 CXX test/cpp_headers/nvme_zns.o 00:03:30.015 CXX test/cpp_headers/nvmf_cmd.o 00:03:30.015 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:30.015 CXX test/cpp_headers/nvmf.o 00:03:30.015 CXX test/cpp_headers/nvmf_spec.o 00:03:30.015 CXX test/cpp_headers/nvmf_transport.o 00:03:30.015 CC examples/nvmf/nvmf/nvmf.o 00:03:30.015 LINK spdk_bdev 00:03:30.015 CXX test/cpp_headers/opal.o 00:03:30.015 CXX test/cpp_headers/opal_spec.o 00:03:30.015 LINK spdk_nvme 00:03:30.015 CXX test/cpp_headers/pci_ids.o 00:03:30.274 CXX test/cpp_headers/pipe.o 00:03:30.274 CXX test/cpp_headers/queue.o 00:03:30.274 CXX test/cpp_headers/reduce.o 00:03:30.274 CXX test/cpp_headers/rpc.o 00:03:30.274 CXX test/cpp_headers/scheduler.o 00:03:30.274 CXX test/cpp_headers/scsi.o 00:03:30.274 CXX test/cpp_headers/scsi_spec.o 00:03:30.274 CXX test/cpp_headers/sock.o 00:03:30.274 CXX test/cpp_headers/stdinc.o 00:03:30.274 CXX test/cpp_headers/string.o 00:03:30.274 LINK nvmf 00:03:30.274 CXX test/cpp_headers/thread.o 00:03:30.274 CXX test/cpp_headers/trace.o 00:03:30.274 CXX test/cpp_headers/trace_parser.o 00:03:30.274 CXX test/cpp_headers/tree.o 00:03:30.274 CXX test/cpp_headers/ublk.o 00:03:30.274 CXX test/cpp_headers/util.o 00:03:30.274 CXX test/cpp_headers/uuid.o 00:03:30.274 CXX test/cpp_headers/version.o 00:03:30.274 CXX test/cpp_headers/vfio_user_pci.o 00:03:30.533 CXX test/cpp_headers/vfio_user_spec.o 00:03:30.533 CXX test/cpp_headers/vhost.o 00:03:30.533 CXX test/cpp_headers/vmd.o 00:03:30.533 CXX test/cpp_headers/xor.o 00:03:30.533 CXX test/cpp_headers/zipf.o 00:03:32.440 LINK esnap 00:03:33.012 00:03:33.012 real 1m9.420s 00:03:33.012 user 6m24.329s 00:03:33.012 sys 1m14.330s 00:03:33.012 09:32:00 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:03:33.012 09:32:00 make -- common/autotest_common.sh@10 -- $ set +x 00:03:33.012 
************************************ 00:03:33.012 END TEST make 00:03:33.012 ************************************ 00:03:33.012 09:32:00 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:33.012 09:32:00 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:33.013 09:32:00 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:33.013 09:32:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:33.013 09:32:00 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:33.013 09:32:00 -- pm/common@44 -- $ pid=5067 00:03:33.013 09:32:00 -- pm/common@50 -- $ kill -TERM 5067 00:03:33.013 09:32:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:33.013 09:32:00 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:33.013 09:32:00 -- pm/common@44 -- $ pid=5068 00:03:33.013 09:32:00 -- pm/common@50 -- $ kill -TERM 5068 00:03:33.013 09:32:00 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:33.013 09:32:00 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:33.013 09:32:00 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:33.013 09:32:00 -- common/autotest_common.sh@1691 -- # lcov --version 00:03:33.013 09:32:00 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:33.013 09:32:00 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:33.013 09:32:00 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:33.013 09:32:00 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:33.013 09:32:00 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:33.013 09:32:00 -- scripts/common.sh@336 -- # IFS=.-: 00:03:33.013 09:32:00 -- scripts/common.sh@336 -- # read -ra ver1 00:03:33.013 09:32:00 -- scripts/common.sh@337 -- # IFS=.-: 00:03:33.013 09:32:00 -- scripts/common.sh@337 -- # read -ra ver2 00:03:33.013 09:32:00 -- scripts/common.sh@338 -- # local 'op=<' 00:03:33.013 09:32:00 -- scripts/common.sh@340 -- # ver1_l=2 00:03:33.013 09:32:00 -- scripts/common.sh@341 -- # ver2_l=1 00:03:33.013 09:32:00 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:33.013 09:32:00 -- scripts/common.sh@344 -- # case "$op" in 00:03:33.013 09:32:00 -- scripts/common.sh@345 -- # : 1 00:03:33.013 09:32:00 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:33.013 09:32:00 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:33.013 09:32:00 -- scripts/common.sh@365 -- # decimal 1 00:03:33.013 09:32:00 -- scripts/common.sh@353 -- # local d=1 00:03:33.013 09:32:00 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:33.013 09:32:00 -- scripts/common.sh@355 -- # echo 1 00:03:33.013 09:32:00 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:33.013 09:32:00 -- scripts/common.sh@366 -- # decimal 2 00:03:33.013 09:32:00 -- scripts/common.sh@353 -- # local d=2 00:03:33.013 09:32:00 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:33.013 09:32:00 -- scripts/common.sh@355 -- # echo 2 00:03:33.013 09:32:00 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:33.013 09:32:00 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:33.013 09:32:00 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:33.013 09:32:00 -- scripts/common.sh@368 -- # return 0 00:03:33.013 09:32:00 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:33.013 09:32:00 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:33.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.013 --rc genhtml_branch_coverage=1 00:03:33.013 --rc genhtml_function_coverage=1 00:03:33.013 --rc genhtml_legend=1 00:03:33.013 --rc geninfo_all_blocks=1 00:03:33.013 --rc geninfo_unexecuted_blocks=1 00:03:33.013 00:03:33.013 ' 00:03:33.013 09:32:00 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:33.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.013 --rc genhtml_branch_coverage=1 00:03:33.013 --rc genhtml_function_coverage=1 00:03:33.013 --rc genhtml_legend=1 00:03:33.013 --rc geninfo_all_blocks=1 00:03:33.013 --rc geninfo_unexecuted_blocks=1 00:03:33.013 00:03:33.013 ' 00:03:33.013 09:32:00 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:33.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.013 --rc genhtml_branch_coverage=1 00:03:33.013 --rc genhtml_function_coverage=1 00:03:33.013 --rc genhtml_legend=1 00:03:33.013 --rc geninfo_all_blocks=1 00:03:33.013 --rc geninfo_unexecuted_blocks=1 00:03:33.013 00:03:33.013 ' 00:03:33.013 09:32:00 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:33.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.013 --rc genhtml_branch_coverage=1 00:03:33.013 --rc genhtml_function_coverage=1 00:03:33.013 --rc genhtml_legend=1 00:03:33.013 --rc geninfo_all_blocks=1 00:03:33.013 --rc geninfo_unexecuted_blocks=1 00:03:33.013 00:03:33.013 ' 00:03:33.013 09:32:00 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:33.013 09:32:00 -- nvmf/common.sh@7 -- # uname -s 00:03:33.013 09:32:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:33.013 09:32:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:33.013 09:32:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:33.013 09:32:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:33.013 09:32:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:33.013 09:32:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:33.013 09:32:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:33.013 09:32:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:33.013 09:32:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:33.013 09:32:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:33.013 09:32:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ad18ec5b-c807-48e1-8f0a-2ea67531be3c 00:03:33.013 
09:32:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=ad18ec5b-c807-48e1-8f0a-2ea67531be3c 00:03:33.013 09:32:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:33.013 09:32:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:33.013 09:32:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:33.013 09:32:00 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:33.013 09:32:00 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:33.013 09:32:00 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:33.013 09:32:00 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:33.013 09:32:00 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:33.013 09:32:00 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:33.013 09:32:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:33.013 09:32:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:33.013 09:32:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:33.013 09:32:00 -- paths/export.sh@5 -- # export PATH 00:03:33.013 09:32:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:33.013 09:32:00 -- nvmf/common.sh@51 -- # : 0 00:03:33.013 09:32:00 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:33.013 09:32:00 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:33.013 09:32:00 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:33.013 09:32:00 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:33.013 09:32:00 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:33.013 09:32:00 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:33.013 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:33.013 09:32:00 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:33.013 09:32:00 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:33.013 09:32:00 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:33.013 09:32:00 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:33.275 09:32:00 -- spdk/autotest.sh@32 -- # uname -s 00:03:33.275 09:32:00 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:33.275 09:32:00 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:33.275 09:32:00 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:33.275 09:32:00 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:33.275 09:32:00 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:33.275 09:32:00 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:33.275 09:32:00 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:33.275 09:32:00 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:33.275 09:32:00 -- spdk/autotest.sh@48 -- # udevadm_pid=54279 00:03:33.275 09:32:00 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:33.275 09:32:00 -- pm/common@17 -- # local monitor 00:03:33.275 09:32:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:33.275 09:32:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:33.275 09:32:00 -- pm/common@25 -- # sleep 1 00:03:33.275 09:32:00 -- pm/common@21 -- # date +%s 00:03:33.275 09:32:00 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:33.275 09:32:00 -- pm/common@21 -- # date +%s 00:03:33.275 09:32:00 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730971920 00:03:33.275 09:32:00 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730971920 00:03:33.275 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730971920_collect-cpu-load.pm.log 00:03:33.275 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730971920_collect-vmstat.pm.log 00:03:34.217 09:32:01 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:34.217 09:32:01 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:34.217 09:32:01 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:34.217 09:32:01 -- common/autotest_common.sh@10 -- # set +x 00:03:34.217 09:32:01 -- spdk/autotest.sh@59 -- # create_test_list 00:03:34.217 09:32:01 -- common/autotest_common.sh@750 -- # xtrace_disable 00:03:34.217 09:32:01 -- common/autotest_common.sh@10 -- # set +x 00:03:34.217 09:32:01 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:34.217 09:32:01 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:34.217 09:32:01 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:34.217 09:32:01 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:34.217 09:32:01 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:34.217 09:32:01 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:34.217 09:32:01 -- common/autotest_common.sh@1455 -- # uname 00:03:34.217 09:32:01 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:34.217 09:32:01 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:34.217 09:32:01 -- common/autotest_common.sh@1475 -- # uname 00:03:34.217 09:32:01 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:34.217 09:32:01 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:34.217 09:32:01 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:34.217 lcov: LCOV version 1.15 00:03:34.477 09:32:01 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:49.379 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:49.379 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:07.512 09:32:32 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:07.512 09:32:32 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:07.512 09:32:32 -- common/autotest_common.sh@10 -- # set +x 00:04:07.512 09:32:32 -- spdk/autotest.sh@78 -- # rm -f 00:04:07.512 09:32:32 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:07.512 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:07.512 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:07.512 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:07.512 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:04:07.512 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:04:07.512 09:32:33 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:07.512 09:32:33 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:07.512 09:32:33 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:07.512 09:32:33 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:07.512 09:32:33 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:07.512 09:32:33 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:07.512 09:32:33 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:07.512 09:32:33 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:07.512 09:32:33 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:07.512 09:32:33 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:07.512 09:32:33 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:04:07.512 09:32:33 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:04:07.512 09:32:33 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:07.512 09:32:33 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:07.512 09:32:33 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:07.512 09:32:33 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2c2n1 00:04:07.512 09:32:33 -- common/autotest_common.sh@1648 -- # local device=nvme2c2n1 00:04:07.512 09:32:33 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2c2n1/queue/zoned ]] 00:04:07.512 09:32:33 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:07.512 09:32:33 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:07.512 09:32:33 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:04:07.512 09:32:33 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:04:07.512 09:32:33 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:07.512 09:32:33 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:07.512 09:32:33 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:07.512 09:32:33 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:04:07.512 09:32:33 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:04:07.512 09:32:33 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:07.512 
09:32:33 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:07.512 09:32:33 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:07.512 09:32:33 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n2 00:04:07.512 09:32:33 -- common/autotest_common.sh@1648 -- # local device=nvme3n2 00:04:07.512 09:32:33 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n2/queue/zoned ]] 00:04:07.512 09:32:33 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:07.513 09:32:33 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:07.513 09:32:33 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n3 00:04:07.513 09:32:33 -- common/autotest_common.sh@1648 -- # local device=nvme3n3 00:04:07.513 09:32:33 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n3/queue/zoned ]] 00:04:07.513 09:32:33 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:07.513 09:32:33 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:07.513 09:32:33 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:07.513 09:32:33 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:07.513 09:32:33 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:07.513 09:32:33 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:07.513 09:32:33 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:07.513 No valid GPT data, bailing 00:04:07.513 09:32:33 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:07.513 09:32:33 -- scripts/common.sh@394 -- # pt= 00:04:07.513 09:32:33 -- scripts/common.sh@395 -- # return 1 00:04:07.513 09:32:33 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:07.513 1+0 records in 00:04:07.513 1+0 records out 00:04:07.513 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0182849 s, 57.3 MB/s 00:04:07.513 09:32:33 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:07.513 09:32:33 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:07.513 09:32:33 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:07.513 09:32:33 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:07.513 09:32:33 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:07.513 No valid GPT data, bailing 00:04:07.513 09:32:33 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:07.513 09:32:33 -- scripts/common.sh@394 -- # pt= 00:04:07.513 09:32:33 -- scripts/common.sh@395 -- # return 1 00:04:07.513 09:32:33 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:07.513 1+0 records in 00:04:07.513 1+0 records out 00:04:07.513 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00551724 s, 190 MB/s 00:04:07.513 09:32:33 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:07.513 09:32:33 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:07.513 09:32:33 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:04:07.513 09:32:33 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:04:07.513 09:32:33 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:07.513 No valid GPT data, bailing 00:04:07.513 09:32:33 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:07.513 09:32:33 -- scripts/common.sh@394 -- # pt= 00:04:07.513 09:32:33 -- scripts/common.sh@395 -- # return 1 00:04:07.513 09:32:33 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:07.513 1+0 
records in 00:04:07.513 1+0 records out 00:04:07.513 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00482945 s, 217 MB/s 00:04:07.513 09:32:33 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:07.513 09:32:33 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:07.513 09:32:33 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:04:07.513 09:32:33 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:04:07.513 09:32:33 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:07.513 No valid GPT data, bailing 00:04:07.513 09:32:33 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:07.513 09:32:33 -- scripts/common.sh@394 -- # pt= 00:04:07.513 09:32:33 -- scripts/common.sh@395 -- # return 1 00:04:07.513 09:32:33 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:07.513 1+0 records in 00:04:07.513 1+0 records out 00:04:07.513 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00526401 s, 199 MB/s 00:04:07.513 09:32:33 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:07.513 09:32:33 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:07.513 09:32:33 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n2 00:04:07.513 09:32:33 -- scripts/common.sh@381 -- # local block=/dev/nvme3n2 pt 00:04:07.513 09:32:33 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n2 00:04:07.513 No valid GPT data, bailing 00:04:07.513 09:32:33 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n2 00:04:07.513 09:32:33 -- scripts/common.sh@394 -- # pt= 00:04:07.513 09:32:33 -- scripts/common.sh@395 -- # return 1 00:04:07.513 09:32:33 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n2 bs=1M count=1 00:04:07.513 1+0 records in 00:04:07.513 1+0 records out 00:04:07.513 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00555996 s, 189 MB/s 00:04:07.513 09:32:33 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:07.513 09:32:33 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:07.513 09:32:33 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n3 00:04:07.513 09:32:33 -- scripts/common.sh@381 -- # local block=/dev/nvme3n3 pt 00:04:07.513 09:32:33 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n3 00:04:07.513 No valid GPT data, bailing 00:04:07.513 09:32:33 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n3 00:04:07.513 09:32:33 -- scripts/common.sh@394 -- # pt= 00:04:07.513 09:32:33 -- scripts/common.sh@395 -- # return 1 00:04:07.513 09:32:33 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n3 bs=1M count=1 00:04:07.513 1+0 records in 00:04:07.513 1+0 records out 00:04:07.513 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00559319 s, 187 MB/s 00:04:07.513 09:32:33 -- spdk/autotest.sh@105 -- # sync 00:04:07.513 09:32:33 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:07.513 09:32:33 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:07.513 09:32:33 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:07.775 09:32:35 -- spdk/autotest.sh@111 -- # uname -s 00:04:07.775 09:32:35 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:07.775 09:32:35 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:07.775 09:32:35 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:08.404 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:08.665 
Hugepages
00:04:08.665 node hugesize free / total
00:04:08.665 node0 1048576kB 0 / 0
00:04:08.665 node0 2048kB 0 / 0
00:04:08.665
00:04:08.665 Type BDF Vendor Device NUMA Driver Device Block devices
00:04:08.665 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:04:08.665 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:04:08.926 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:04:08.926 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme3 nvme3n1 nvme3n2 nvme3n3
00:04:08.926 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme2 nvme2n1
00:04:08.926 09:32:36 -- spdk/autotest.sh@117 -- # uname -s
00:04:08.926 09:32:36 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:04:08.926 09:32:36 -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:04:08.926 09:32:36 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:09.495 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:10.067 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:04:10.067 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:04:10.067 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:04:10.067 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:04:10.067 09:32:37 -- common/autotest_common.sh@1515 -- # sleep 1
00:04:11.011 09:32:38 -- common/autotest_common.sh@1516 -- # bdfs=()
00:04:11.011 09:32:38 -- common/autotest_common.sh@1516 -- # local bdfs
00:04:11.011 09:32:38 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs))
00:04:11.011 09:32:38 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs
00:04:11.011 09:32:38 -- common/autotest_common.sh@1496 -- # bdfs=()
00:04:11.011 09:32:38 -- common/autotest_common.sh@1496 -- # local bdfs
00:04:11.011 09:32:38 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:04:11.011 09:32:38 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:04:11.011 09:32:38 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr'
00:04:11.012 09:32:38 -- common/autotest_common.sh@1498 -- # (( 4 == 0 ))
00:04:11.012 09:32:38 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:04:11.012 09:32:38 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:11.272 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:11.533 Waiting for block devices as requested
00:04:11.533 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:04:11.533 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:04:11.794 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:04:11.794 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:04:17.087 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:04:17.087 09:32:44 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}"
00:04:17.087 09:32:44 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0
00:04:17.087 09:32:44 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme
00:04:17.087 09:32:44 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3
00:04:17.087 09:32:44 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1
00:04:17.087 09:32:44 -- common/autotest_common.sh@1486 -- #
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:17.087 09:32:44 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:17.087 09:32:44 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:04:17.087 09:32:44 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:04:17.087 09:32:44 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:04:17.087 09:32:44 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:04:17.087 09:32:44 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:17.087 09:32:44 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:17.087 09:32:44 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:17.087 09:32:44 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:17.087 09:32:44 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:17.087 09:32:44 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:17.087 09:32:44 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:04:17.087 09:32:44 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:17.087 09:32:44 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:17.087 09:32:44 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:17.087 09:32:44 -- common/autotest_common.sh@1541 -- # continue 00:04:17.087 09:32:44 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:17.087 09:32:44 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:17.087 09:32:44 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:17.087 09:32:44 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:04:17.087 09:32:44 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:17.087 09:32:44 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:17.087 09:32:44 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:17.087 09:32:44 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:17.087 09:32:44 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:17.087 09:32:44 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:17.087 09:32:44 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:17.087 09:32:44 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:17.087 09:32:44 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:17.087 09:32:44 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:17.087 09:32:44 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:17.087 09:32:44 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:17.087 09:32:44 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:04:17.087 09:32:44 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:17.087 09:32:44 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:17.087 09:32:44 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:17.087 09:32:44 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:17.087 09:32:44 -- common/autotest_common.sh@1541 -- # continue 00:04:17.087 09:32:44 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:17.087 09:32:44 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:04:17.087 09:32:44 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:17.087 09:32:44 -- common/autotest_common.sh@1485 -- # grep 0000:00:12.0/nvme/nvme 00:04:17.087 09:32:44 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:17.087 09:32:44 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:04:17.087 09:32:44 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:17.087 09:32:44 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme2 00:04:17.087 09:32:44 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme2 00:04:17.087 09:32:44 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme2 ]] 00:04:17.087 09:32:44 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme2 00:04:17.087 09:32:44 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:17.087 09:32:44 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:17.087 09:32:44 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:17.087 09:32:44 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:17.087 09:32:44 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:17.087 09:32:44 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme2 00:04:17.087 09:32:44 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:17.087 09:32:44 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:17.087 09:32:44 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:17.087 09:32:44 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:17.087 09:32:44 -- common/autotest_common.sh@1541 -- # continue 00:04:17.087 09:32:44 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:17.087 09:32:44 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:04:17.087 09:32:44 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:17.087 09:32:44 -- common/autotest_common.sh@1485 -- # grep 0000:00:13.0/nvme/nvme 00:04:17.087 09:32:44 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:17.087 09:32:44 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:04:17.087 09:32:44 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:17.087 09:32:44 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme3 00:04:17.087 09:32:44 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme3 00:04:17.087 09:32:44 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme3 ]] 00:04:17.087 09:32:44 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme3 00:04:17.087 09:32:44 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:17.087 09:32:44 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:17.087 09:32:44 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:17.087 09:32:44 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:17.087 09:32:44 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:17.087 09:32:44 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme3 00:04:17.087 09:32:44 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:17.087 09:32:44 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:17.087 09:32:44 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:17.087 09:32:44 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 
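Unrolled, the get_nvme_ctrlr_from_bdf trace above is a small sysfs recipe: resolve the /sys/class/nvme/nvmeX symlinks, keep the one whose target path contains the PCI BDF, then read OACS from nvme id-ctrl and test bit 3 (namespace management, the 0x8 mask behind oacs_ns_manage=8 above) before checking unvmcap. A condensed sketch under those assumptions, with the BDF hard-coded for illustration:

    bdf=0000:00:10.0
    # resolve controller symlinks and keep the one whose path contains our BDF
    ctrlr=$(basename "$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")")
    # OACS bit 3 (0x8) advertises namespace management support
    oacs=$(nvme id-ctrl "/dev/$ctrlr" | grep oacs | cut -d: -f2)
    if (( oacs & 0x8 )); then
      unvmcap=$(nvme id-ctrl "/dev/$ctrlr" | grep unvmcap | cut -d: -f2)
      echo "$ctrlr: namespace management supported, unvmcap=$unvmcap"
    fi
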
00:04:17.087 09:32:44 -- common/autotest_common.sh@1541 -- # continue 00:04:17.087 09:32:44 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:17.087 09:32:44 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:17.087 09:32:44 -- common/autotest_common.sh@10 -- # set +x 00:04:17.087 09:32:44 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:17.087 09:32:44 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:17.087 09:32:44 -- common/autotest_common.sh@10 -- # set +x 00:04:17.087 09:32:44 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:17.661 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:17.923 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:17.923 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:18.184 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:18.184 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:18.184 09:32:45 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:18.184 09:32:45 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:18.184 09:32:45 -- common/autotest_common.sh@10 -- # set +x 00:04:18.184 09:32:45 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:18.184 09:32:45 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:18.184 09:32:45 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:18.184 09:32:45 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:18.184 09:32:45 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:18.184 09:32:45 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:18.184 09:32:45 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:18.184 09:32:45 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:18.184 09:32:45 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:18.184 09:32:45 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:18.184 09:32:45 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:18.184 09:32:45 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:18.184 09:32:45 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:18.446 09:32:45 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:04:18.446 09:32:45 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:18.446 09:32:45 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:18.446 09:32:45 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:18.446 09:32:45 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:18.446 09:32:45 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:18.446 09:32:45 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:18.446 09:32:45 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:18.446 09:32:45 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:18.446 09:32:45 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:18.446 09:32:45 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:18.446 09:32:45 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:04:18.446 09:32:45 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:18.446 09:32:45 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
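The opal_revert_cleanup pass traced on either side of this point reduces to: list the NVMe BDFs via gen_nvme.sh, read each controller's PCI device ID from sysfs, and only revert parts whose ID is 0x0a54; on this QEMU rig every ID reads 0x0010, so the loop falls through. A sketch of that scan, with paths taken from the trace:

    for bdf in $(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'); do
      device=$(cat "/sys/bus/pci/devices/$bdf/device")
      # only 0x0a54 parts would get an opal revert; QEMU's 0x0010 is skipped
      [[ $device == 0x0a54 ]] && echo "$bdf: would run opal revert"
    done
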
00:04:18.446 09:32:45 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:18.446 09:32:45 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:04:18.446 09:32:45 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:18.446 09:32:45 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:18.446 09:32:45 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:04:18.446 09:32:45 -- common/autotest_common.sh@1570 -- # return 0 00:04:18.446 09:32:45 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:04:18.446 09:32:45 -- common/autotest_common.sh@1578 -- # return 0 00:04:18.446 09:32:45 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:18.446 09:32:45 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:18.446 09:32:45 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:18.446 09:32:45 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:18.446 09:32:45 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:18.446 09:32:45 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:18.446 09:32:45 -- common/autotest_common.sh@10 -- # set +x 00:04:18.446 09:32:45 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:18.446 09:32:45 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:18.446 09:32:45 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:18.446 09:32:45 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:18.446 09:32:45 -- common/autotest_common.sh@10 -- # set +x 00:04:18.446 ************************************ 00:04:18.446 START TEST env 00:04:18.446 ************************************ 00:04:18.446 09:32:45 env -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:18.446 * Looking for test storage... 00:04:18.446 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:18.446 09:32:45 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:18.446 09:32:45 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:18.446 09:32:45 env -- common/autotest_common.sh@1691 -- # lcov --version 00:04:18.446 09:32:46 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:18.446 09:32:46 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:18.446 09:32:46 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:18.446 09:32:46 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:18.446 09:32:46 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:18.446 09:32:46 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:18.446 09:32:46 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:18.446 09:32:46 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:18.446 09:32:46 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:18.446 09:32:46 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:18.446 09:32:46 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:18.446 09:32:46 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:18.446 09:32:46 env -- scripts/common.sh@344 -- # case "$op" in 00:04:18.446 09:32:46 env -- scripts/common.sh@345 -- # : 1 00:04:18.446 09:32:46 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:18.446 09:32:46 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:18.446 09:32:46 env -- scripts/common.sh@365 -- # decimal 1 00:04:18.446 09:32:46 env -- scripts/common.sh@353 -- # local d=1 00:04:18.446 09:32:46 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:18.446 09:32:46 env -- scripts/common.sh@355 -- # echo 1 00:04:18.446 09:32:46 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:18.446 09:32:46 env -- scripts/common.sh@366 -- # decimal 2 00:04:18.446 09:32:46 env -- scripts/common.sh@353 -- # local d=2 00:04:18.446 09:32:46 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:18.446 09:32:46 env -- scripts/common.sh@355 -- # echo 2 00:04:18.446 09:32:46 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:18.446 09:32:46 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:18.446 09:32:46 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:18.446 09:32:46 env -- scripts/common.sh@368 -- # return 0 00:04:18.446 09:32:46 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:18.446 09:32:46 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:18.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.446 --rc genhtml_branch_coverage=1 00:04:18.446 --rc genhtml_function_coverage=1 00:04:18.446 --rc genhtml_legend=1 00:04:18.446 --rc geninfo_all_blocks=1 00:04:18.446 --rc geninfo_unexecuted_blocks=1 00:04:18.446 00:04:18.446 ' 00:04:18.447 09:32:46 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:18.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.447 --rc genhtml_branch_coverage=1 00:04:18.447 --rc genhtml_function_coverage=1 00:04:18.447 --rc genhtml_legend=1 00:04:18.447 --rc geninfo_all_blocks=1 00:04:18.447 --rc geninfo_unexecuted_blocks=1 00:04:18.447 00:04:18.447 ' 00:04:18.447 09:32:46 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:18.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.447 --rc genhtml_branch_coverage=1 00:04:18.447 --rc genhtml_function_coverage=1 00:04:18.447 --rc genhtml_legend=1 00:04:18.447 --rc geninfo_all_blocks=1 00:04:18.447 --rc geninfo_unexecuted_blocks=1 00:04:18.447 00:04:18.447 ' 00:04:18.447 09:32:46 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:18.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.447 --rc genhtml_branch_coverage=1 00:04:18.447 --rc genhtml_function_coverage=1 00:04:18.447 --rc genhtml_legend=1 00:04:18.447 --rc geninfo_all_blocks=1 00:04:18.447 --rc geninfo_unexecuted_blocks=1 00:04:18.447 00:04:18.447 ' 00:04:18.447 09:32:46 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:18.447 09:32:46 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:18.447 09:32:46 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:18.447 09:32:46 env -- common/autotest_common.sh@10 -- # set +x 00:04:18.447 ************************************ 00:04:18.447 START TEST env_memory 00:04:18.447 ************************************ 00:04:18.447 09:32:46 env.env_memory -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:18.447 00:04:18.447 00:04:18.447 CUnit - A unit testing framework for C - Version 2.1-3 00:04:18.447 http://cunit.sourceforge.net/ 00:04:18.447 00:04:18.447 00:04:18.447 Suite: memory 00:04:18.708 Test: alloc and free memory map ...[2024-11-07 09:32:46.143647] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:18.708 passed 00:04:18.708 Test: mem map translation ...[2024-11-07 09:32:46.182496] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:18.708 [2024-11-07 09:32:46.182544] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:18.708 [2024-11-07 09:32:46.182606] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:18.709 [2024-11-07 09:32:46.182621] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:18.709 passed 00:04:18.709 Test: mem map registration ...[2024-11-07 09:32:46.250830] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:18.709 [2024-11-07 09:32:46.250877] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:18.709 passed 00:04:18.709 Test: mem map adjacent registrations ...passed 00:04:18.709 00:04:18.709 Run Summary: Type Total Ran Passed Failed Inactive 00:04:18.709 suites 1 1 n/a 0 0 00:04:18.709 tests 4 4 4 0 0 00:04:18.709 asserts 152 152 152 0 n/a 00:04:18.709 00:04:18.709 Elapsed time = 0.235 seconds 00:04:18.709 00:04:18.709 real 0m0.275s 00:04:18.709 user 0m0.246s 00:04:18.709 sys 0m0.021s 00:04:18.709 09:32:46 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:18.709 ************************************ 00:04:18.709 END TEST env_memory 00:04:18.709 ************************************ 00:04:18.709 09:32:46 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:18.970 09:32:46 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:18.970 09:32:46 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:18.970 09:32:46 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:18.970 09:32:46 env -- common/autotest_common.sh@10 -- # set +x 00:04:18.970 ************************************ 00:04:18.970 START TEST env_vtophys 00:04:18.970 ************************************ 00:04:18.970 09:32:46 env.env_vtophys -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:18.970 EAL: lib.eal log level changed from notice to debug 00:04:18.970 EAL: Detected lcore 0 as core 0 on socket 0 00:04:18.970 EAL: Detected lcore 1 as core 0 on socket 0 00:04:18.970 EAL: Detected lcore 2 as core 0 on socket 0 00:04:18.970 EAL: Detected lcore 3 as core 0 on socket 0 00:04:18.970 EAL: Detected lcore 4 as core 0 on socket 0 00:04:18.970 EAL: Detected lcore 5 as core 0 on socket 0 00:04:18.970 EAL: Detected lcore 6 as core 0 on socket 0 00:04:18.970 EAL: Detected lcore 7 as core 0 on socket 0 00:04:18.970 EAL: Detected lcore 8 as core 0 on socket 0 00:04:18.970 EAL: Detected lcore 9 as core 0 on socket 0 00:04:18.970 EAL: Maximum logical cores by configuration: 128 00:04:18.970 EAL: Detected CPU lcores: 10 00:04:18.970 EAL: Detected NUMA nodes: 1 00:04:18.970 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:18.970 EAL: Detected shared linkage of DPDK 00:04:18.970 EAL: No 
shared files mode enabled, IPC will be disabled 00:04:18.970 EAL: Selected IOVA mode 'PA' 00:04:18.970 EAL: Probing VFIO support... 00:04:18.970 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:18.970 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:18.970 EAL: Ask a virtual area of 0x2e000 bytes 00:04:18.970 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:18.970 EAL: Setting up physically contiguous memory... 00:04:18.970 EAL: Setting maximum number of open files to 524288 00:04:18.970 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:18.970 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:18.970 EAL: Ask a virtual area of 0x61000 bytes 00:04:18.970 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:18.970 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:18.970 EAL: Ask a virtual area of 0x400000000 bytes 00:04:18.970 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:18.970 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:18.970 EAL: Ask a virtual area of 0x61000 bytes 00:04:18.970 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:18.970 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:18.970 EAL: Ask a virtual area of 0x400000000 bytes 00:04:18.970 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:18.970 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:18.970 EAL: Ask a virtual area of 0x61000 bytes 00:04:18.970 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:18.970 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:18.970 EAL: Ask a virtual area of 0x400000000 bytes 00:04:18.970 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:18.970 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:18.970 EAL: Ask a virtual area of 0x61000 bytes 00:04:18.970 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:18.970 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:18.970 EAL: Ask a virtual area of 0x400000000 bytes 00:04:18.970 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:18.970 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:18.970 EAL: Hugepages will be freed exactly as allocated. 00:04:18.970 EAL: No shared files mode enabled, IPC is disabled 00:04:18.970 EAL: No shared files mode enabled, IPC is disabled 00:04:18.970 EAL: TSC frequency is ~2600000 KHz 00:04:18.970 EAL: Main lcore 0 is ready (tid=7f55a5a33a40;cpuset=[0]) 00:04:18.970 EAL: Trying to obtain current memory policy. 00:04:18.970 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:18.970 EAL: Restoring previous memory policy: 0 00:04:18.970 EAL: request: mp_malloc_sync 00:04:18.970 EAL: No shared files mode enabled, IPC is disabled 00:04:18.970 EAL: Heap on socket 0 was expanded by 2MB 00:04:18.970 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:18.970 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:18.970 EAL: Mem event callback 'spdk:(nil)' registered 00:04:18.970 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:04:19.233 00:04:19.233 00:04:19.233 CUnit - A unit testing framework for C - Version 2.1-3 00:04:19.233 http://cunit.sourceforge.net/ 00:04:19.233 00:04:19.233 00:04:19.233 Suite: components_suite 00:04:19.494 Test: vtophys_malloc_test ...passed 00:04:19.494 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:19.494 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.494 EAL: Restoring previous memory policy: 4 00:04:19.494 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.494 EAL: request: mp_malloc_sync 00:04:19.494 EAL: No shared files mode enabled, IPC is disabled 00:04:19.494 EAL: Heap on socket 0 was expanded by 4MB 00:04:19.494 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.494 EAL: request: mp_malloc_sync 00:04:19.494 EAL: No shared files mode enabled, IPC is disabled 00:04:19.494 EAL: Heap on socket 0 was shrunk by 4MB 00:04:19.494 EAL: Trying to obtain current memory policy. 00:04:19.494 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.494 EAL: Restoring previous memory policy: 4 00:04:19.494 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.494 EAL: request: mp_malloc_sync 00:04:19.494 EAL: No shared files mode enabled, IPC is disabled 00:04:19.494 EAL: Heap on socket 0 was expanded by 6MB 00:04:19.494 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.494 EAL: request: mp_malloc_sync 00:04:19.494 EAL: No shared files mode enabled, IPC is disabled 00:04:19.494 EAL: Heap on socket 0 was shrunk by 6MB 00:04:19.494 EAL: Trying to obtain current memory policy. 00:04:19.494 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.494 EAL: Restoring previous memory policy: 4 00:04:19.494 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.494 EAL: request: mp_malloc_sync 00:04:19.494 EAL: No shared files mode enabled, IPC is disabled 00:04:19.494 EAL: Heap on socket 0 was expanded by 10MB 00:04:19.494 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.494 EAL: request: mp_malloc_sync 00:04:19.494 EAL: No shared files mode enabled, IPC is disabled 00:04:19.494 EAL: Heap on socket 0 was shrunk by 10MB 00:04:19.494 EAL: Trying to obtain current memory policy. 00:04:19.494 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.494 EAL: Restoring previous memory policy: 4 00:04:19.494 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.494 EAL: request: mp_malloc_sync 00:04:19.494 EAL: No shared files mode enabled, IPC is disabled 00:04:19.494 EAL: Heap on socket 0 was expanded by 18MB 00:04:19.494 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.494 EAL: request: mp_malloc_sync 00:04:19.494 EAL: No shared files mode enabled, IPC is disabled 00:04:19.494 EAL: Heap on socket 0 was shrunk by 18MB 00:04:19.494 EAL: Trying to obtain current memory policy. 00:04:19.494 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.494 EAL: Restoring previous memory policy: 4 00:04:19.494 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.494 EAL: request: mp_malloc_sync 00:04:19.494 EAL: No shared files mode enabled, IPC is disabled 00:04:19.494 EAL: Heap on socket 0 was expanded by 34MB 00:04:19.764 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.764 EAL: request: mp_malloc_sync 00:04:19.764 EAL: No shared files mode enabled, IPC is disabled 00:04:19.764 EAL: Heap on socket 0 was shrunk by 34MB 00:04:19.764 EAL: Trying to obtain current memory policy. 
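The expand/shrink pairs that follow are vtophys_spdk_malloc_test walking the hugepage-backed heap up and down: each spdk_malloc()/spdk_free() round trip fires the registered 'spdk:(nil)' mem event callback, which is what prints the "Heap on socket 0 was expanded/shrunk by NMB" lines. A minimal sketch of that allocation pattern, assuming SPDK headers, an initialized environment, and available hugepages; the app name and buffer size here are illustrative, not taken from the test:

#include <inttypes.h>
#include <stdio.h>
#include "spdk/env.h"

int main(void)
{
    struct spdk_env_opts opts;

    spdk_env_opts_init(&opts);
    opts.name = "malloc_sketch"; /* illustrative name, not the test's */
    if (spdk_env_init(&opts) < 0) {
        return 1;
    }

    /* Pulling 4 MB from the DMA-safe heap is what triggers a
     * "Heap on socket 0 was expanded by 4MB" message like the one above... */
    void *buf = spdk_malloc(4 * 1024 * 1024, 0x200000, NULL,
                            SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);
    if (buf != NULL) {
        uint64_t len = 4 * 1024 * 1024;
        printf("vaddr %p -> paddr 0x%" PRIx64 "\n",
               buf, spdk_vtophys(buf, &len));
        /* ...and freeing it produces the matching "shrunk by" line. */
        spdk_free(buf);
    }
    spdk_env_fini();
    return 0;
}
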
00:04:19.764 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.764 EAL: Restoring previous memory policy: 4 00:04:19.764 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.764 EAL: request: mp_malloc_sync 00:04:19.764 EAL: No shared files mode enabled, IPC is disabled 00:04:19.764 EAL: Heap on socket 0 was expanded by 66MB 00:04:19.764 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.764 EAL: request: mp_malloc_sync 00:04:19.764 EAL: No shared files mode enabled, IPC is disabled 00:04:19.764 EAL: Heap on socket 0 was shrunk by 66MB 00:04:19.764 EAL: Trying to obtain current memory policy. 00:04:19.764 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.764 EAL: Restoring previous memory policy: 4 00:04:19.764 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.764 EAL: request: mp_malloc_sync 00:04:19.764 EAL: No shared files mode enabled, IPC is disabled 00:04:19.764 EAL: Heap on socket 0 was expanded by 130MB 00:04:20.071 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.071 EAL: request: mp_malloc_sync 00:04:20.071 EAL: No shared files mode enabled, IPC is disabled 00:04:20.071 EAL: Heap on socket 0 was shrunk by 130MB 00:04:20.071 EAL: Trying to obtain current memory policy. 00:04:20.071 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.071 EAL: Restoring previous memory policy: 4 00:04:20.071 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.071 EAL: request: mp_malloc_sync 00:04:20.071 EAL: No shared files mode enabled, IPC is disabled 00:04:20.071 EAL: Heap on socket 0 was expanded by 258MB 00:04:20.644 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.644 EAL: request: mp_malloc_sync 00:04:20.644 EAL: No shared files mode enabled, IPC is disabled 00:04:20.644 EAL: Heap on socket 0 was shrunk by 258MB 00:04:20.905 EAL: Trying to obtain current memory policy. 00:04:20.905 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.905 EAL: Restoring previous memory policy: 4 00:04:20.905 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.905 EAL: request: mp_malloc_sync 00:04:20.906 EAL: No shared files mode enabled, IPC is disabled 00:04:20.906 EAL: Heap on socket 0 was expanded by 514MB 00:04:21.478 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.478 EAL: request: mp_malloc_sync 00:04:21.478 EAL: No shared files mode enabled, IPC is disabled 00:04:21.478 EAL: Heap on socket 0 was shrunk by 514MB 00:04:22.058 EAL: Trying to obtain current memory policy. 
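For reference while the malloc test continues below: the env_memory errors logged earlier (len=1234, vaddr=4d2, vaddr=281474976710656) are that suite deliberately passing values that are not 2 MB aligned or are out of range, since SPDK mem maps track translations at hugepage granularity. A sketch of the API those checks guard; the addresses and the 0xD0000000 translation value are placeholders chosen purely for illustration:

#include "spdk/env.h"

static int
notify_cb(void *cb_ctx, struct spdk_mem_map *map,
          enum spdk_mem_map_notify_action action, void *vaddr, size_t len)
{
    return 0; /* accept every register/unregister notification */
}

static const struct spdk_mem_map_ops ops = {
    .notify_cb = notify_cb,
    .are_contiguous = NULL,
};

void
mem_map_sketch(void)
{
    /* 0 is the translation reported for regions never set below. */
    struct spdk_mem_map *map = spdk_mem_map_alloc(0, &ops, NULL);
    uint64_t len = 0x200000;

    if (map == NULL) {
        return;
    }
    /* vaddr and len must both be multiples of 2 MB; len=1234 is exactly
     * what drew the "invalid parameters" errors in the unit test above. */
    spdk_mem_map_set_translation(map, 0x200000, 0x200000, 0xD0000000);
    spdk_mem_map_translate(map, 0x200000, &len); /* -> 0xD0000000 */
    spdk_mem_map_clear_translation(map, 0x200000, 0x200000);
    spdk_mem_map_free(&map);
}
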
00:04:22.058 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:22.323 EAL: Restoring previous memory policy: 4 00:04:22.323 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.323 EAL: request: mp_malloc_sync 00:04:22.323 EAL: No shared files mode enabled, IPC is disabled 00:04:22.323 EAL: Heap on socket 0 was expanded by 1026MB 00:04:23.708 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.708 EAL: request: mp_malloc_sync 00:04:23.708 EAL: No shared files mode enabled, IPC is disabled 00:04:23.708 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:24.649 passed 00:04:24.649 00:04:24.649 Run Summary: Type Total Ran Passed Failed Inactive 00:04:24.649 suites 1 1 n/a 0 0 00:04:24.649 tests 2 2 2 0 0 00:04:24.649 asserts 5642 5642 5642 0 n/a 00:04:24.649 00:04:24.650 Elapsed time = 5.262 seconds 00:04:24.650 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.650 EAL: request: mp_malloc_sync 00:04:24.650 EAL: No shared files mode enabled, IPC is disabled 00:04:24.650 EAL: Heap on socket 0 was shrunk by 2MB 00:04:24.650 EAL: No shared files mode enabled, IPC is disabled 00:04:24.650 EAL: No shared files mode enabled, IPC is disabled 00:04:24.650 EAL: No shared files mode enabled, IPC is disabled 00:04:24.650 00:04:24.650 real 0m5.551s 00:04:24.650 user 0m4.493s 00:04:24.650 sys 0m0.908s 00:04:24.650 09:32:51 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:24.650 ************************************ 00:04:24.650 END TEST env_vtophys 00:04:24.650 09:32:51 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:24.650 ************************************ 00:04:24.650 09:32:52 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:24.650 09:32:52 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:24.650 09:32:52 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:24.650 09:32:52 env -- common/autotest_common.sh@10 -- # set +x 00:04:24.650 ************************************ 00:04:24.650 START TEST env_pci 00:04:24.650 ************************************ 00:04:24.650 09:32:52 env.env_pci -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:24.650 00:04:24.650 00:04:24.650 CUnit - A unit testing framework for C - Version 2.1-3 00:04:24.650 http://cunit.sourceforge.net/ 00:04:24.650 00:04:24.650 00:04:24.650 Suite: pci 00:04:24.650 Test: pci_hook ...[2024-11-07 09:32:52.062204] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57055 has claimed it 00:04:24.650 passed 00:04:24.650 00:04:24.650 Run Summary: Type Total Ran Passed Failed Inactive 00:04:24.650 suites 1 1 n/a 0 0 00:04:24.650 tests 1 1 1 0 0 00:04:24.650 asserts 25 25 25 0 n/a 00:04:24.650 00:04:24.650 Elapsed time = 0.004 seconds 00:04:24.650 EAL: Cannot find device (10000:00:01.0) 00:04:24.650 EAL: Failed to attach device on primary process 00:04:24.650 00:04:24.650 real 0m0.062s 00:04:24.650 user 0m0.030s 00:04:24.650 sys 0m0.031s 00:04:24.650 09:32:52 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:24.650 ************************************ 00:04:24.650 09:32:52 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:24.650 END TEST env_pci 00:04:24.650 ************************************ 00:04:24.650 09:32:52 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:24.650 09:32:52 env -- env/env.sh@15 -- # uname 00:04:24.650 09:32:52 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:24.650 09:32:52 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:24.650 09:32:52 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:24.650 09:32:52 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:04:24.650 09:32:52 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:24.650 09:32:52 env -- common/autotest_common.sh@10 -- # set +x 00:04:24.650 ************************************ 00:04:24.650 START TEST env_dpdk_post_init 00:04:24.650 ************************************ 00:04:24.650 09:32:52 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:24.650 EAL: Detected CPU lcores: 10 00:04:24.650 EAL: Detected NUMA nodes: 1 00:04:24.650 EAL: Detected shared linkage of DPDK 00:04:24.650 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:24.650 EAL: Selected IOVA mode 'PA' 00:04:24.914 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:24.914 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:24.914 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:24.914 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:04:24.914 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:04:24.914 Starting DPDK initialization... 00:04:24.914 Starting SPDK post initialization... 00:04:24.914 SPDK NVMe probe 00:04:24.914 Attaching to 0000:00:10.0 00:04:24.914 Attaching to 0000:00:11.0 00:04:24.914 Attaching to 0000:00:12.0 00:04:24.914 Attaching to 0000:00:13.0 00:04:24.914 Attached to 0000:00:13.0 00:04:24.914 Attached to 0000:00:10.0 00:04:24.914 Attached to 0000:00:11.0 00:04:24.914 Attached to 0000:00:12.0 00:04:24.914 Cleaning up... 
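The four "Attaching to/Attached to" pairs above are spdk_nvme_probe() walking the emulated PCIe bus and handing each QEMU NVMe controller (1b36:0010) to the spdk_nvme driver. A bare-bones sketch of that probe flow, assuming the env layer is already initialized; the callback bodies are reduced to prints:

#include <stdbool.h>
#include <stdio.h>
#include "spdk/nvme.h"

static bool
probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
         struct spdk_nvme_ctrlr_opts *opts)
{
    printf("Attaching to %s\n", trid->traddr);
    return true; /* true = attach to this controller */
}

static void
attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
          struct spdk_nvme_ctrlr *ctrlr,
          const struct spdk_nvme_ctrlr_opts *opts)
{
    printf("Attached to %s\n", trid->traddr);
}

int
probe_all_local_nvme(void)
{
    /* A NULL transport ID scans the local PCIe bus, which is what
     * produces the per-device probe lines in the log above. */
    return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);
}
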
00:04:24.914 00:04:24.914 real 0m0.280s 00:04:24.914 user 0m0.084s 00:04:24.914 sys 0m0.097s 00:04:24.914 ************************************ 00:04:24.914 END TEST env_dpdk_post_init 00:04:24.914 ************************************ 00:04:24.914 09:32:52 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:24.914 09:32:52 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:24.914 09:32:52 env -- env/env.sh@26 -- # uname 00:04:24.914 09:32:52 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:24.914 09:32:52 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:24.914 09:32:52 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:24.914 09:32:52 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:24.914 09:32:52 env -- common/autotest_common.sh@10 -- # set +x 00:04:24.914 ************************************ 00:04:24.914 START TEST env_mem_callbacks 00:04:24.914 ************************************ 00:04:24.914 09:32:52 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:24.914 EAL: Detected CPU lcores: 10 00:04:24.914 EAL: Detected NUMA nodes: 1 00:04:24.914 EAL: Detected shared linkage of DPDK 00:04:24.914 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:24.914 EAL: Selected IOVA mode 'PA' 00:04:25.193 00:04:25.193 00:04:25.193 CUnit - A unit testing framework for C - Version 2.1-3 00:04:25.193 http://cunit.sourceforge.net/ 00:04:25.193 00:04:25.193 00:04:25.193 Suite: memory 00:04:25.193 Test: test ... 00:04:25.193 register 0x200000200000 2097152 00:04:25.193 malloc 3145728 00:04:25.193 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:25.193 register 0x200000400000 4194304 00:04:25.193 buf 0x2000004fffc0 len 3145728 PASSED 00:04:25.193 malloc 64 00:04:25.193 buf 0x2000004ffec0 len 64 PASSED 00:04:25.193 malloc 4194304 00:04:25.193 register 0x200000800000 6291456 00:04:25.193 buf 0x2000009fffc0 len 4194304 PASSED 00:04:25.193 free 0x2000004fffc0 3145728 00:04:25.193 free 0x2000004ffec0 64 00:04:25.193 unregister 0x200000400000 4194304 PASSED 00:04:25.193 free 0x2000009fffc0 4194304 00:04:25.193 unregister 0x200000800000 6291456 PASSED 00:04:25.193 malloc 8388608 00:04:25.193 register 0x200000400000 10485760 00:04:25.193 buf 0x2000005fffc0 len 8388608 PASSED 00:04:25.193 free 0x2000005fffc0 8388608 00:04:25.193 unregister 0x200000400000 10485760 PASSED 00:04:25.193 passed 00:04:25.193 00:04:25.193 Run Summary: Type Total Ran Passed Failed Inactive 00:04:25.193 suites 1 1 n/a 0 0 00:04:25.193 tests 1 1 1 0 0 00:04:25.193 asserts 15 15 15 0 n/a 00:04:25.193 00:04:25.193 Elapsed time = 0.051 seconds 00:04:25.193 00:04:25.193 real 0m0.232s 00:04:25.193 user 0m0.062s 00:04:25.193 sys 0m0.067s 00:04:25.193 09:32:52 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:25.193 ************************************ 00:04:25.193 END TEST env_mem_callbacks 00:04:25.193 09:32:52 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:25.193 ************************************ 00:04:25.193 00:04:25.193 real 0m6.875s 00:04:25.193 user 0m5.095s 00:04:25.193 sys 0m1.335s 00:04:25.193 09:32:52 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:25.193 09:32:52 env -- common/autotest_common.sh@10 -- # set +x 00:04:25.193 ************************************ 00:04:25.193 END TEST env 00:04:25.193 
************************************ 00:04:25.193 09:32:52 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:25.193 09:32:52 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:25.193 09:32:52 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:25.193 09:32:52 -- common/autotest_common.sh@10 -- # set +x 00:04:25.193 ************************************ 00:04:25.193 START TEST rpc 00:04:25.193 ************************************ 00:04:25.193 09:32:52 rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:25.454 * Looking for test storage... 00:04:25.454 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:25.454 09:32:52 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:25.454 09:32:52 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:25.454 09:32:52 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:25.454 09:32:52 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:25.454 09:32:52 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:25.454 09:32:52 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:25.454 09:32:52 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:25.454 09:32:52 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:25.454 09:32:52 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:25.454 09:32:52 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:25.454 09:32:52 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:25.454 09:32:52 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:25.454 09:32:52 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:25.454 09:32:52 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:25.454 09:32:52 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:25.454 09:32:52 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:25.454 09:32:52 rpc -- scripts/common.sh@345 -- # : 1 00:04:25.454 09:32:52 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:25.454 09:32:52 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:25.454 09:32:52 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:25.454 09:32:52 rpc -- scripts/common.sh@353 -- # local d=1 00:04:25.454 09:32:52 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:25.454 09:32:52 rpc -- scripts/common.sh@355 -- # echo 1 00:04:25.454 09:32:52 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:25.454 09:32:52 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:25.454 09:32:52 rpc -- scripts/common.sh@353 -- # local d=2 00:04:25.454 09:32:52 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:25.454 09:32:52 rpc -- scripts/common.sh@355 -- # echo 2 00:04:25.454 09:32:52 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:25.454 09:32:52 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:25.454 09:32:52 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:25.454 09:32:52 rpc -- scripts/common.sh@368 -- # return 0 00:04:25.454 09:32:52 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:25.454 09:32:52 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:25.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.454 --rc genhtml_branch_coverage=1 00:04:25.454 --rc genhtml_function_coverage=1 00:04:25.454 --rc genhtml_legend=1 00:04:25.454 --rc geninfo_all_blocks=1 00:04:25.454 --rc geninfo_unexecuted_blocks=1 00:04:25.454 00:04:25.454 ' 00:04:25.454 09:32:52 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:25.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.454 --rc genhtml_branch_coverage=1 00:04:25.454 --rc genhtml_function_coverage=1 00:04:25.454 --rc genhtml_legend=1 00:04:25.454 --rc geninfo_all_blocks=1 00:04:25.454 --rc geninfo_unexecuted_blocks=1 00:04:25.454 00:04:25.454 ' 00:04:25.454 09:32:53 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:25.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.454 --rc genhtml_branch_coverage=1 00:04:25.454 --rc genhtml_function_coverage=1 00:04:25.454 --rc genhtml_legend=1 00:04:25.454 --rc geninfo_all_blocks=1 00:04:25.454 --rc geninfo_unexecuted_blocks=1 00:04:25.454 00:04:25.454 ' 00:04:25.454 09:32:53 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:25.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.454 --rc genhtml_branch_coverage=1 00:04:25.454 --rc genhtml_function_coverage=1 00:04:25.454 --rc genhtml_legend=1 00:04:25.454 --rc geninfo_all_blocks=1 00:04:25.454 --rc geninfo_unexecuted_blocks=1 00:04:25.454 00:04:25.454 ' 00:04:25.454 09:32:53 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57176 00:04:25.454 09:32:53 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:25.454 09:32:53 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57176 00:04:25.454 09:32:53 rpc -- common/autotest_common.sh@833 -- # '[' -z 57176 ']' 00:04:25.454 09:32:53 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:25.454 09:32:53 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:25.454 09:32:53 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:25.454 09:32:53 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:25.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
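Once spdk_tgt is listening, every rpc_cmd in the tests below is ultimately a JSON-RPC request over that UNIX domain socket (the bash helper drives SPDK's Python RPC client). A self-contained sketch of the same round trip using nothing but POSIX sockets; the method name is taken from the log, while buffering and error handling are kept deliberately minimal:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

int main(void)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    char resp[4096];
    ssize_t n;

    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }
    strncpy(addr.sun_path, "/var/tmp/spdk.sock", sizeof(addr.sun_path) - 1);
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");
        close(fd);
        return 1;
    }
    /* bdev_get_bdevs is the same method the rpc_integrity test issues below. */
    const char *req = "{\"jsonrpc\":\"2.0\",\"method\":\"bdev_get_bdevs\",\"id\":1}";
    write(fd, req, strlen(req));
    n = read(fd, resp, sizeof(resp) - 1); /* one read; fine for small replies */
    if (n > 0) {
        resp[n] = '\0';
        printf("%s\n", resp);
    }
    close(fd);
    return 0;
}
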
00:04:25.454 09:32:53 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:25.454 09:32:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.454 [2024-11-07 09:32:53.110995] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:04:25.454 [2024-11-07 09:32:53.111176] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57176 ] 00:04:25.715 [2024-11-07 09:32:53.279365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.975 [2024-11-07 09:32:53.425115] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:25.975 [2024-11-07 09:32:53.425197] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57176' to capture a snapshot of events at runtime. 00:04:25.975 [2024-11-07 09:32:53.425210] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:25.975 [2024-11-07 09:32:53.425221] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:25.975 [2024-11-07 09:32:53.425230] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57176 for offline analysis/debug. 00:04:25.976 [2024-11-07 09:32:53.426268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.917 09:32:54 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:26.917 09:32:54 rpc -- common/autotest_common.sh@866 -- # return 0 00:04:26.917 09:32:54 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:26.917 09:32:54 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:26.917 09:32:54 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:26.917 09:32:54 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:26.917 09:32:54 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:26.917 09:32:54 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:26.917 09:32:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.917 ************************************ 00:04:26.917 START TEST rpc_integrity 00:04:26.917 ************************************ 00:04:26.917 09:32:54 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:04:26.917 09:32:54 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:26.917 09:32:54 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:26.917 09:32:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.917 09:32:54 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:26.917 09:32:54 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:26.917 09:32:54 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:26.917 09:32:54 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:26.917 09:32:54 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:26.917 09:32:54 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:26.917 09:32:54 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.917 09:32:54 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:26.917 09:32:54 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:26.917 09:32:54 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:26.917 09:32:54 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:26.917 09:32:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.917 09:32:54 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:26.917 09:32:54 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:26.917 { 00:04:26.917 "name": "Malloc0", 00:04:26.917 "aliases": [ 00:04:26.917 "4afcc130-de32-4a87-8ae2-73155fbc46ac" 00:04:26.917 ], 00:04:26.917 "product_name": "Malloc disk", 00:04:26.917 "block_size": 512, 00:04:26.917 "num_blocks": 16384, 00:04:26.917 "uuid": "4afcc130-de32-4a87-8ae2-73155fbc46ac", 00:04:26.917 "assigned_rate_limits": { 00:04:26.917 "rw_ios_per_sec": 0, 00:04:26.917 "rw_mbytes_per_sec": 0, 00:04:26.917 "r_mbytes_per_sec": 0, 00:04:26.917 "w_mbytes_per_sec": 0 00:04:26.917 }, 00:04:26.917 "claimed": false, 00:04:26.917 "zoned": false, 00:04:26.917 "supported_io_types": { 00:04:26.917 "read": true, 00:04:26.917 "write": true, 00:04:26.917 "unmap": true, 00:04:26.917 "flush": true, 00:04:26.917 "reset": true, 00:04:26.917 "nvme_admin": false, 00:04:26.917 "nvme_io": false, 00:04:26.917 "nvme_io_md": false, 00:04:26.917 "write_zeroes": true, 00:04:26.917 "zcopy": true, 00:04:26.917 "get_zone_info": false, 00:04:26.917 "zone_management": false, 00:04:26.917 "zone_append": false, 00:04:26.917 "compare": false, 00:04:26.917 "compare_and_write": false, 00:04:26.917 "abort": true, 00:04:26.917 "seek_hole": false, 00:04:26.917 "seek_data": false, 00:04:26.917 "copy": true, 00:04:26.917 "nvme_iov_md": false 00:04:26.917 }, 00:04:26.917 "memory_domains": [ 00:04:26.917 { 00:04:26.917 "dma_device_id": "system", 00:04:26.917 "dma_device_type": 1 00:04:26.917 }, 00:04:26.917 { 00:04:26.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:26.917 "dma_device_type": 2 00:04:26.917 } 00:04:26.917 ], 00:04:26.917 "driver_specific": {} 00:04:26.917 } 00:04:26.917 ]' 00:04:26.917 09:32:54 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:26.917 09:32:54 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:26.917 09:32:54 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:26.917 09:32:54 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:26.917 09:32:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.917 [2024-11-07 09:32:54.372181] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:26.917 [2024-11-07 09:32:54.372283] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:26.917 [2024-11-07 09:32:54.372321] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:04:26.917 [2024-11-07 09:32:54.372337] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:26.917 [2024-11-07 09:32:54.375247] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:26.917 [2024-11-07 09:32:54.375318] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:26.917 Passthru0 00:04:26.917 09:32:54 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:26.917 
09:32:54 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:26.917 09:32:54 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:26.917 09:32:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.917 09:32:54 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:26.917 09:32:54 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:26.917 { 00:04:26.917 "name": "Malloc0", 00:04:26.917 "aliases": [ 00:04:26.917 "4afcc130-de32-4a87-8ae2-73155fbc46ac" 00:04:26.917 ], 00:04:26.917 "product_name": "Malloc disk", 00:04:26.917 "block_size": 512, 00:04:26.917 "num_blocks": 16384, 00:04:26.917 "uuid": "4afcc130-de32-4a87-8ae2-73155fbc46ac", 00:04:26.917 "assigned_rate_limits": { 00:04:26.917 "rw_ios_per_sec": 0, 00:04:26.917 "rw_mbytes_per_sec": 0, 00:04:26.917 "r_mbytes_per_sec": 0, 00:04:26.917 "w_mbytes_per_sec": 0 00:04:26.917 }, 00:04:26.917 "claimed": true, 00:04:26.917 "claim_type": "exclusive_write", 00:04:26.917 "zoned": false, 00:04:26.917 "supported_io_types": { 00:04:26.917 "read": true, 00:04:26.917 "write": true, 00:04:26.917 "unmap": true, 00:04:26.917 "flush": true, 00:04:26.917 "reset": true, 00:04:26.917 "nvme_admin": false, 00:04:26.917 "nvme_io": false, 00:04:26.917 "nvme_io_md": false, 00:04:26.917 "write_zeroes": true, 00:04:26.917 "zcopy": true, 00:04:26.917 "get_zone_info": false, 00:04:26.917 "zone_management": false, 00:04:26.917 "zone_append": false, 00:04:26.917 "compare": false, 00:04:26.917 "compare_and_write": false, 00:04:26.917 "abort": true, 00:04:26.917 "seek_hole": false, 00:04:26.917 "seek_data": false, 00:04:26.917 "copy": true, 00:04:26.917 "nvme_iov_md": false 00:04:26.917 }, 00:04:26.917 "memory_domains": [ 00:04:26.917 { 00:04:26.917 "dma_device_id": "system", 00:04:26.917 "dma_device_type": 1 00:04:26.917 }, 00:04:26.917 { 00:04:26.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:26.917 "dma_device_type": 2 00:04:26.917 } 00:04:26.917 ], 00:04:26.917 "driver_specific": {} 00:04:26.917 }, 00:04:26.917 { 00:04:26.917 "name": "Passthru0", 00:04:26.917 "aliases": [ 00:04:26.917 "7db23d3f-9aaf-5b7e-a2ad-e7d667cd0b76" 00:04:26.917 ], 00:04:26.917 "product_name": "passthru", 00:04:26.917 "block_size": 512, 00:04:26.917 "num_blocks": 16384, 00:04:26.917 "uuid": "7db23d3f-9aaf-5b7e-a2ad-e7d667cd0b76", 00:04:26.917 "assigned_rate_limits": { 00:04:26.917 "rw_ios_per_sec": 0, 00:04:26.917 "rw_mbytes_per_sec": 0, 00:04:26.917 "r_mbytes_per_sec": 0, 00:04:26.918 "w_mbytes_per_sec": 0 00:04:26.918 }, 00:04:26.918 "claimed": false, 00:04:26.918 "zoned": false, 00:04:26.918 "supported_io_types": { 00:04:26.918 "read": true, 00:04:26.918 "write": true, 00:04:26.918 "unmap": true, 00:04:26.918 "flush": true, 00:04:26.918 "reset": true, 00:04:26.918 "nvme_admin": false, 00:04:26.918 "nvme_io": false, 00:04:26.918 "nvme_io_md": false, 00:04:26.918 "write_zeroes": true, 00:04:26.918 "zcopy": true, 00:04:26.918 "get_zone_info": false, 00:04:26.918 "zone_management": false, 00:04:26.918 "zone_append": false, 00:04:26.918 "compare": false, 00:04:26.918 "compare_and_write": false, 00:04:26.918 "abort": true, 00:04:26.918 "seek_hole": false, 00:04:26.918 "seek_data": false, 00:04:26.918 "copy": true, 00:04:26.918 "nvme_iov_md": false 00:04:26.918 }, 00:04:26.918 "memory_domains": [ 00:04:26.918 { 00:04:26.918 "dma_device_id": "system", 00:04:26.918 "dma_device_type": 1 00:04:26.918 }, 00:04:26.918 { 00:04:26.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:26.918 "dma_device_type": 2 
00:04:26.918 } 00:04:26.918 ], 00:04:26.918 "driver_specific": { 00:04:26.918 "passthru": { 00:04:26.918 "name": "Passthru0", 00:04:26.918 "base_bdev_name": "Malloc0" 00:04:26.918 } 00:04:26.918 } 00:04:26.918 } 00:04:26.918 ]' 00:04:26.918 09:32:54 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:26.918 09:32:54 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:26.918 09:32:54 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:26.918 09:32:54 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:26.918 09:32:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.918 09:32:54 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:26.918 09:32:54 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:26.918 09:32:54 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:26.918 09:32:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.918 09:32:54 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:26.918 09:32:54 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:26.918 09:32:54 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:26.918 09:32:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.918 09:32:54 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:26.918 09:32:54 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:26.918 09:32:54 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:26.918 09:32:54 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:26.918 00:04:26.918 real 0m0.275s 00:04:26.918 user 0m0.136s 00:04:26.918 sys 0m0.042s 00:04:26.918 09:32:54 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:26.918 09:32:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.918 ************************************ 00:04:26.918 END TEST rpc_integrity 00:04:26.918 ************************************ 00:04:26.918 09:32:54 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:26.918 09:32:54 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:26.918 09:32:54 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:26.918 09:32:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.180 ************************************ 00:04:27.180 START TEST rpc_plugins 00:04:27.180 ************************************ 00:04:27.180 09:32:54 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:04:27.180 09:32:54 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:27.180 09:32:54 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.180 09:32:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:27.180 09:32:54 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.180 09:32:54 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:27.180 09:32:54 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:27.180 09:32:54 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.180 09:32:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:27.180 09:32:54 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.180 09:32:54 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:27.180 { 00:04:27.180 "name": "Malloc1", 00:04:27.180 "aliases": 
[ 00:04:27.180 "2bb28d77-33a4-4822-b0a7-f12369198ae4" 00:04:27.180 ], 00:04:27.180 "product_name": "Malloc disk", 00:04:27.180 "block_size": 4096, 00:04:27.180 "num_blocks": 256, 00:04:27.180 "uuid": "2bb28d77-33a4-4822-b0a7-f12369198ae4", 00:04:27.180 "assigned_rate_limits": { 00:04:27.180 "rw_ios_per_sec": 0, 00:04:27.180 "rw_mbytes_per_sec": 0, 00:04:27.180 "r_mbytes_per_sec": 0, 00:04:27.180 "w_mbytes_per_sec": 0 00:04:27.180 }, 00:04:27.180 "claimed": false, 00:04:27.180 "zoned": false, 00:04:27.180 "supported_io_types": { 00:04:27.180 "read": true, 00:04:27.180 "write": true, 00:04:27.180 "unmap": true, 00:04:27.180 "flush": true, 00:04:27.180 "reset": true, 00:04:27.180 "nvme_admin": false, 00:04:27.180 "nvme_io": false, 00:04:27.180 "nvme_io_md": false, 00:04:27.180 "write_zeroes": true, 00:04:27.180 "zcopy": true, 00:04:27.180 "get_zone_info": false, 00:04:27.180 "zone_management": false, 00:04:27.180 "zone_append": false, 00:04:27.180 "compare": false, 00:04:27.180 "compare_and_write": false, 00:04:27.180 "abort": true, 00:04:27.180 "seek_hole": false, 00:04:27.180 "seek_data": false, 00:04:27.180 "copy": true, 00:04:27.180 "nvme_iov_md": false 00:04:27.180 }, 00:04:27.180 "memory_domains": [ 00:04:27.180 { 00:04:27.180 "dma_device_id": "system", 00:04:27.180 "dma_device_type": 1 00:04:27.180 }, 00:04:27.180 { 00:04:27.180 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:27.180 "dma_device_type": 2 00:04:27.180 } 00:04:27.180 ], 00:04:27.180 "driver_specific": {} 00:04:27.180 } 00:04:27.180 ]' 00:04:27.180 09:32:54 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:27.180 09:32:54 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:27.180 09:32:54 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:27.180 09:32:54 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.180 09:32:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:27.180 09:32:54 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.180 09:32:54 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:27.180 09:32:54 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.180 09:32:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:27.180 09:32:54 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.180 09:32:54 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:27.180 09:32:54 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:27.180 09:32:54 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:27.180 00:04:27.180 real 0m0.126s 00:04:27.180 user 0m0.070s 00:04:27.180 sys 0m0.017s 00:04:27.180 09:32:54 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:27.180 ************************************ 00:04:27.180 END TEST rpc_plugins 00:04:27.180 ************************************ 00:04:27.180 09:32:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:27.180 09:32:54 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:27.180 09:32:54 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:27.180 09:32:54 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:27.180 09:32:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.180 ************************************ 00:04:27.180 START TEST rpc_trace_cmd_test 00:04:27.180 ************************************ 00:04:27.180 09:32:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 
-- # rpc_trace_cmd_test 00:04:27.180 09:32:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:27.180 09:32:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:27.180 09:32:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.180 09:32:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:27.180 09:32:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.180 09:32:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:27.180 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57176", 00:04:27.180 "tpoint_group_mask": "0x8", 00:04:27.180 "iscsi_conn": { 00:04:27.180 "mask": "0x2", 00:04:27.180 "tpoint_mask": "0x0" 00:04:27.180 }, 00:04:27.180 "scsi": { 00:04:27.180 "mask": "0x4", 00:04:27.180 "tpoint_mask": "0x0" 00:04:27.180 }, 00:04:27.180 "bdev": { 00:04:27.180 "mask": "0x8", 00:04:27.180 "tpoint_mask": "0xffffffffffffffff" 00:04:27.180 }, 00:04:27.180 "nvmf_rdma": { 00:04:27.180 "mask": "0x10", 00:04:27.180 "tpoint_mask": "0x0" 00:04:27.180 }, 00:04:27.180 "nvmf_tcp": { 00:04:27.180 "mask": "0x20", 00:04:27.180 "tpoint_mask": "0x0" 00:04:27.180 }, 00:04:27.180 "ftl": { 00:04:27.180 "mask": "0x40", 00:04:27.180 "tpoint_mask": "0x0" 00:04:27.180 }, 00:04:27.180 "blobfs": { 00:04:27.180 "mask": "0x80", 00:04:27.180 "tpoint_mask": "0x0" 00:04:27.180 }, 00:04:27.180 "dsa": { 00:04:27.180 "mask": "0x200", 00:04:27.180 "tpoint_mask": "0x0" 00:04:27.180 }, 00:04:27.180 "thread": { 00:04:27.180 "mask": "0x400", 00:04:27.180 "tpoint_mask": "0x0" 00:04:27.180 }, 00:04:27.180 "nvme_pcie": { 00:04:27.180 "mask": "0x800", 00:04:27.180 "tpoint_mask": "0x0" 00:04:27.180 }, 00:04:27.180 "iaa": { 00:04:27.180 "mask": "0x1000", 00:04:27.180 "tpoint_mask": "0x0" 00:04:27.180 }, 00:04:27.180 "nvme_tcp": { 00:04:27.180 "mask": "0x2000", 00:04:27.180 "tpoint_mask": "0x0" 00:04:27.180 }, 00:04:27.180 "bdev_nvme": { 00:04:27.180 "mask": "0x4000", 00:04:27.180 "tpoint_mask": "0x0" 00:04:27.180 }, 00:04:27.180 "sock": { 00:04:27.180 "mask": "0x8000", 00:04:27.180 "tpoint_mask": "0x0" 00:04:27.180 }, 00:04:27.180 "blob": { 00:04:27.180 "mask": "0x10000", 00:04:27.180 "tpoint_mask": "0x0" 00:04:27.180 }, 00:04:27.180 "bdev_raid": { 00:04:27.180 "mask": "0x20000", 00:04:27.180 "tpoint_mask": "0x0" 00:04:27.180 }, 00:04:27.180 "scheduler": { 00:04:27.180 "mask": "0x40000", 00:04:27.180 "tpoint_mask": "0x0" 00:04:27.180 } 00:04:27.180 }' 00:04:27.180 09:32:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:27.180 09:32:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:27.180 09:32:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:27.442 09:32:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:27.442 09:32:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:27.442 09:32:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:27.442 09:32:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:27.442 09:32:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:27.442 09:32:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:27.442 09:32:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:27.442 00:04:27.442 real 0m0.182s 00:04:27.442 user 0m0.144s 00:04:27.442 sys 0m0.027s 00:04:27.442 09:32:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:04:27.442 09:32:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:27.442 ************************************ 00:04:27.442 END TEST rpc_trace_cmd_test 00:04:27.442 ************************************ 00:04:27.442 09:32:55 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:27.442 09:32:55 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:27.442 09:32:55 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:27.442 09:32:55 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:27.442 09:32:55 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:27.442 09:32:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.442 ************************************ 00:04:27.442 START TEST rpc_daemon_integrity 00:04:27.442 ************************************ 00:04:27.442 09:32:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:04:27.442 09:32:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:27.442 09:32:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.442 09:32:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.442 09:32:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.442 09:32:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:27.442 09:32:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:27.442 09:32:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:27.442 09:32:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:27.442 09:32:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.442 09:32:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.442 09:32:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.442 09:32:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:27.442 09:32:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:27.442 09:32:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.442 09:32:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.704 09:32:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.704 09:32:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:27.704 { 00:04:27.704 "name": "Malloc2", 00:04:27.704 "aliases": [ 00:04:27.704 "09f1a3e6-2af9-4a02-81a6-863b13815de1" 00:04:27.704 ], 00:04:27.704 "product_name": "Malloc disk", 00:04:27.704 "block_size": 512, 00:04:27.704 "num_blocks": 16384, 00:04:27.704 "uuid": "09f1a3e6-2af9-4a02-81a6-863b13815de1", 00:04:27.704 "assigned_rate_limits": { 00:04:27.704 "rw_ios_per_sec": 0, 00:04:27.704 "rw_mbytes_per_sec": 0, 00:04:27.704 "r_mbytes_per_sec": 0, 00:04:27.704 "w_mbytes_per_sec": 0 00:04:27.704 }, 00:04:27.704 "claimed": false, 00:04:27.704 "zoned": false, 00:04:27.704 "supported_io_types": { 00:04:27.704 "read": true, 00:04:27.704 "write": true, 00:04:27.704 "unmap": true, 00:04:27.704 "flush": true, 00:04:27.704 "reset": true, 00:04:27.704 "nvme_admin": false, 00:04:27.704 "nvme_io": false, 00:04:27.704 "nvme_io_md": false, 00:04:27.704 "write_zeroes": true, 00:04:27.704 "zcopy": true, 00:04:27.704 "get_zone_info": false, 00:04:27.704 "zone_management": false, 00:04:27.704 "zone_append": false, 00:04:27.704 "compare": false, 00:04:27.704 
"compare_and_write": false, 00:04:27.704 "abort": true, 00:04:27.704 "seek_hole": false, 00:04:27.704 "seek_data": false, 00:04:27.704 "copy": true, 00:04:27.704 "nvme_iov_md": false 00:04:27.704 }, 00:04:27.704 "memory_domains": [ 00:04:27.704 { 00:04:27.704 "dma_device_id": "system", 00:04:27.704 "dma_device_type": 1 00:04:27.704 }, 00:04:27.704 { 00:04:27.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:27.704 "dma_device_type": 2 00:04:27.704 } 00:04:27.704 ], 00:04:27.704 "driver_specific": {} 00:04:27.704 } 00:04:27.704 ]' 00:04:27.704 09:32:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:27.704 09:32:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:27.704 09:32:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:27.704 09:32:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.704 09:32:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.704 [2024-11-07 09:32:55.155187] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:27.704 [2024-11-07 09:32:55.155287] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:27.704 [2024-11-07 09:32:55.155321] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:04:27.704 [2024-11-07 09:32:55.155336] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:27.704 [2024-11-07 09:32:55.158284] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:27.704 [2024-11-07 09:32:55.158349] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:27.704 Passthru0 00:04:27.704 09:32:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.704 09:32:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:27.704 09:32:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.704 09:32:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.704 09:32:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.704 09:32:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:27.704 { 00:04:27.704 "name": "Malloc2", 00:04:27.704 "aliases": [ 00:04:27.704 "09f1a3e6-2af9-4a02-81a6-863b13815de1" 00:04:27.704 ], 00:04:27.704 "product_name": "Malloc disk", 00:04:27.704 "block_size": 512, 00:04:27.704 "num_blocks": 16384, 00:04:27.704 "uuid": "09f1a3e6-2af9-4a02-81a6-863b13815de1", 00:04:27.704 "assigned_rate_limits": { 00:04:27.704 "rw_ios_per_sec": 0, 00:04:27.704 "rw_mbytes_per_sec": 0, 00:04:27.704 "r_mbytes_per_sec": 0, 00:04:27.704 "w_mbytes_per_sec": 0 00:04:27.704 }, 00:04:27.704 "claimed": true, 00:04:27.704 "claim_type": "exclusive_write", 00:04:27.704 "zoned": false, 00:04:27.704 "supported_io_types": { 00:04:27.704 "read": true, 00:04:27.704 "write": true, 00:04:27.704 "unmap": true, 00:04:27.704 "flush": true, 00:04:27.704 "reset": true, 00:04:27.704 "nvme_admin": false, 00:04:27.704 "nvme_io": false, 00:04:27.704 "nvme_io_md": false, 00:04:27.704 "write_zeroes": true, 00:04:27.704 "zcopy": true, 00:04:27.704 "get_zone_info": false, 00:04:27.704 "zone_management": false, 00:04:27.704 "zone_append": false, 00:04:27.704 "compare": false, 00:04:27.704 "compare_and_write": false, 00:04:27.704 "abort": true, 00:04:27.704 "seek_hole": false, 00:04:27.704 "seek_data": false, 
00:04:27.704 "copy": true, 00:04:27.704 "nvme_iov_md": false 00:04:27.704 }, 00:04:27.704 "memory_domains": [ 00:04:27.704 { 00:04:27.704 "dma_device_id": "system", 00:04:27.704 "dma_device_type": 1 00:04:27.704 }, 00:04:27.704 { 00:04:27.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:27.704 "dma_device_type": 2 00:04:27.704 } 00:04:27.704 ], 00:04:27.704 "driver_specific": {} 00:04:27.704 }, 00:04:27.704 { 00:04:27.704 "name": "Passthru0", 00:04:27.704 "aliases": [ 00:04:27.704 "093665b5-bb38-5716-b329-5df9fe953063" 00:04:27.704 ], 00:04:27.704 "product_name": "passthru", 00:04:27.705 "block_size": 512, 00:04:27.705 "num_blocks": 16384, 00:04:27.705 "uuid": "093665b5-bb38-5716-b329-5df9fe953063", 00:04:27.705 "assigned_rate_limits": { 00:04:27.705 "rw_ios_per_sec": 0, 00:04:27.705 "rw_mbytes_per_sec": 0, 00:04:27.705 "r_mbytes_per_sec": 0, 00:04:27.705 "w_mbytes_per_sec": 0 00:04:27.705 }, 00:04:27.705 "claimed": false, 00:04:27.705 "zoned": false, 00:04:27.705 "supported_io_types": { 00:04:27.705 "read": true, 00:04:27.705 "write": true, 00:04:27.705 "unmap": true, 00:04:27.705 "flush": true, 00:04:27.705 "reset": true, 00:04:27.705 "nvme_admin": false, 00:04:27.705 "nvme_io": false, 00:04:27.705 "nvme_io_md": false, 00:04:27.705 "write_zeroes": true, 00:04:27.705 "zcopy": true, 00:04:27.705 "get_zone_info": false, 00:04:27.705 "zone_management": false, 00:04:27.705 "zone_append": false, 00:04:27.705 "compare": false, 00:04:27.705 "compare_and_write": false, 00:04:27.705 "abort": true, 00:04:27.705 "seek_hole": false, 00:04:27.705 "seek_data": false, 00:04:27.705 "copy": true, 00:04:27.705 "nvme_iov_md": false 00:04:27.705 }, 00:04:27.705 "memory_domains": [ 00:04:27.705 { 00:04:27.705 "dma_device_id": "system", 00:04:27.705 "dma_device_type": 1 00:04:27.705 }, 00:04:27.705 { 00:04:27.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:27.705 "dma_device_type": 2 00:04:27.705 } 00:04:27.705 ], 00:04:27.705 "driver_specific": { 00:04:27.705 "passthru": { 00:04:27.705 "name": "Passthru0", 00:04:27.705 "base_bdev_name": "Malloc2" 00:04:27.705 } 00:04:27.705 } 00:04:27.705 } 00:04:27.705 ]' 00:04:27.705 09:32:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:27.705 09:32:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:27.705 09:32:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:27.705 09:32:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.705 09:32:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.705 09:32:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.705 09:32:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:27.705 09:32:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.705 09:32:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.705 09:32:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.705 09:32:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:27.705 09:32:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.705 09:32:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.705 09:32:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.705 09:32:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:04:27.705 09:32:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:27.705 09:32:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:27.705 00:04:27.705 real 0m0.253s 00:04:27.705 user 0m0.120s 00:04:27.705 sys 0m0.042s 00:04:27.705 09:32:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:27.705 ************************************ 00:04:27.705 END TEST rpc_daemon_integrity 00:04:27.705 ************************************ 00:04:27.705 09:32:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.705 09:32:55 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:27.705 09:32:55 rpc -- rpc/rpc.sh@84 -- # killprocess 57176 00:04:27.705 09:32:55 rpc -- common/autotest_common.sh@952 -- # '[' -z 57176 ']' 00:04:27.705 09:32:55 rpc -- common/autotest_common.sh@956 -- # kill -0 57176 00:04:27.705 09:32:55 rpc -- common/autotest_common.sh@957 -- # uname 00:04:27.705 09:32:55 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:27.705 09:32:55 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57176 00:04:27.705 09:32:55 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:27.705 killing process with pid 57176 00:04:27.705 09:32:55 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:27.705 09:32:55 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57176' 00:04:27.705 09:32:55 rpc -- common/autotest_common.sh@971 -- # kill 57176 00:04:27.705 09:32:55 rpc -- common/autotest_common.sh@976 -- # wait 57176 00:04:29.622 00:04:29.622 real 0m4.101s 00:04:29.622 user 0m4.311s 00:04:29.622 sys 0m0.904s 00:04:29.622 09:32:56 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:29.622 09:32:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.622 ************************************ 00:04:29.622 END TEST rpc 00:04:29.622 ************************************ 00:04:29.622 09:32:56 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:29.622 09:32:56 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:29.622 09:32:56 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:29.622 09:32:56 -- common/autotest_common.sh@10 -- # set +x 00:04:29.622 ************************************ 00:04:29.622 START TEST skip_rpc 00:04:29.622 ************************************ 00:04:29.622 09:32:56 skip_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:29.622 * Looking for test storage... 
00:04:29.622 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:29.622 09:32:57 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:29.622 09:32:57 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:29.622 09:32:57 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:29.622 09:32:57 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:29.622 09:32:57 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:29.622 09:32:57 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:29.622 09:32:57 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:29.622 09:32:57 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:29.622 09:32:57 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:29.622 09:32:57 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:29.622 09:32:57 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:29.622 09:32:57 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:29.622 09:32:57 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:29.622 09:32:57 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:29.622 09:32:57 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:29.622 09:32:57 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:29.622 09:32:57 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:29.622 09:32:57 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:29.622 09:32:57 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:29.622 09:32:57 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:29.622 09:32:57 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:29.622 09:32:57 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:29.622 09:32:57 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:29.622 09:32:57 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:29.622 09:32:57 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:29.622 09:32:57 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:29.622 09:32:57 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:29.622 09:32:57 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:29.622 09:32:57 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:29.622 09:32:57 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:29.622 09:32:57 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:29.622 09:32:57 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:29.622 09:32:57 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:29.622 09:32:57 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:29.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.623 --rc genhtml_branch_coverage=1 00:04:29.623 --rc genhtml_function_coverage=1 00:04:29.623 --rc genhtml_legend=1 00:04:29.623 --rc geninfo_all_blocks=1 00:04:29.623 --rc geninfo_unexecuted_blocks=1 00:04:29.623 00:04:29.623 ' 00:04:29.623 09:32:57 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:29.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.623 --rc genhtml_branch_coverage=1 00:04:29.623 --rc genhtml_function_coverage=1 00:04:29.623 --rc genhtml_legend=1 00:04:29.623 --rc geninfo_all_blocks=1 00:04:29.623 --rc geninfo_unexecuted_blocks=1 00:04:29.623 00:04:29.623 ' 00:04:29.623 09:32:57 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:04:29.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.623 --rc genhtml_branch_coverage=1 00:04:29.623 --rc genhtml_function_coverage=1 00:04:29.623 --rc genhtml_legend=1 00:04:29.623 --rc geninfo_all_blocks=1 00:04:29.623 --rc geninfo_unexecuted_blocks=1 00:04:29.623 00:04:29.623 ' 00:04:29.623 09:32:57 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:29.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.623 --rc genhtml_branch_coverage=1 00:04:29.623 --rc genhtml_function_coverage=1 00:04:29.623 --rc genhtml_legend=1 00:04:29.623 --rc geninfo_all_blocks=1 00:04:29.623 --rc geninfo_unexecuted_blocks=1 00:04:29.623 00:04:29.623 ' 00:04:29.623 09:32:57 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:29.623 09:32:57 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:29.623 09:32:57 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:29.623 09:32:57 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:29.623 09:32:57 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:29.623 09:32:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.623 ************************************ 00:04:29.623 START TEST skip_rpc 00:04:29.623 ************************************ 00:04:29.623 09:32:57 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:04:29.623 09:32:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57400 00:04:29.623 09:32:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:29.623 09:32:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:29.623 09:32:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:29.623 [2024-11-07 09:32:57.209425] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:04:29.623 [2024-11-07 09:32:57.209555] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57400 ] 00:04:29.883 [2024-11-07 09:32:57.369027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.883 [2024-11-07 09:32:57.459649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.181 09:33:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:35.181 09:33:02 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:35.181 09:33:02 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:35.181 09:33:02 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:35.181 09:33:02 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:35.181 09:33:02 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:35.181 09:33:02 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:35.181 09:33:02 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:35.181 09:33:02 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.181 09:33:02 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.181 09:33:02 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:35.181 09:33:02 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:35.181 09:33:02 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:35.181 09:33:02 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:35.181 09:33:02 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:35.181 09:33:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:35.181 09:33:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57400 00:04:35.181 09:33:02 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 57400 ']' 00:04:35.181 09:33:02 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 57400 00:04:35.181 09:33:02 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:04:35.181 09:33:02 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:35.181 09:33:02 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57400 00:04:35.181 killing process with pid 57400 00:04:35.181 09:33:02 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:35.181 09:33:02 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:35.181 09:33:02 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57400' 00:04:35.181 09:33:02 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 57400 00:04:35.181 09:33:02 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 57400 00:04:35.747 00:04:35.747 real 0m6.276s 00:04:35.747 user 0m5.854s 00:04:35.747 sys 0m0.321s 00:04:35.747 ************************************ 00:04:35.747 END TEST skip_rpc 00:04:35.747 ************************************ 00:04:35.747 09:33:03 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:35.747 09:33:03 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:04:36.006 09:33:03 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:36.006 09:33:03 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:36.006 09:33:03 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:36.006 09:33:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.006 ************************************ 00:04:36.006 START TEST skip_rpc_with_json 00:04:36.006 ************************************ 00:04:36.006 09:33:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:04:36.006 09:33:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:36.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:36.006 09:33:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57493 00:04:36.006 09:33:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:36.006 09:33:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57493 00:04:36.006 09:33:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:36.006 09:33:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 57493 ']' 00:04:36.006 09:33:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.006 09:33:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:36.006 09:33:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:36.006 09:33:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:36.006 09:33:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:36.006 [2024-11-07 09:33:03.520901] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:04:36.006 [2024-11-07 09:33:03.521023] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57493 ] 00:04:36.264 [2024-11-07 09:33:03.677425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.264 [2024-11-07 09:33:03.770667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.830 09:33:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:36.830 09:33:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:04:36.830 09:33:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:36.830 09:33:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.830 09:33:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:36.830 [2024-11-07 09:33:04.361658] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:36.830 request: 00:04:36.830 { 00:04:36.830 "trtype": "tcp", 00:04:36.830 "method": "nvmf_get_transports", 00:04:36.830 "req_id": 1 00:04:36.830 } 00:04:36.830 Got JSON-RPC error response 00:04:36.830 response: 00:04:36.830 { 00:04:36.830 "code": -19, 00:04:36.830 "message": "No such device" 00:04:36.830 } 00:04:36.830 09:33:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:36.830 09:33:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:36.830 09:33:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.830 09:33:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:36.830 [2024-11-07 09:33:04.369747] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:36.830 09:33:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.830 09:33:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:36.830 09:33:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.830 09:33:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:37.088 09:33:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.088 09:33:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:37.088 { 00:04:37.088 "subsystems": [ 00:04:37.088 { 00:04:37.088 "subsystem": "fsdev", 00:04:37.088 "config": [ 00:04:37.088 { 00:04:37.088 "method": "fsdev_set_opts", 00:04:37.088 "params": { 00:04:37.088 "fsdev_io_pool_size": 65535, 00:04:37.088 "fsdev_io_cache_size": 256 00:04:37.088 } 00:04:37.088 } 00:04:37.088 ] 00:04:37.088 }, 00:04:37.088 { 00:04:37.088 "subsystem": "keyring", 00:04:37.088 "config": [] 00:04:37.088 }, 00:04:37.088 { 00:04:37.088 "subsystem": "iobuf", 00:04:37.088 "config": [ 00:04:37.088 { 00:04:37.088 "method": "iobuf_set_options", 00:04:37.088 "params": { 00:04:37.088 "small_pool_count": 8192, 00:04:37.088 "large_pool_count": 1024, 00:04:37.088 "small_bufsize": 8192, 00:04:37.088 "large_bufsize": 135168, 00:04:37.088 "enable_numa": false 00:04:37.088 } 00:04:37.088 } 00:04:37.088 ] 00:04:37.088 }, 00:04:37.088 { 00:04:37.088 "subsystem": "sock", 00:04:37.088 "config": [ 00:04:37.088 { 
00:04:37.088 "method": "sock_set_default_impl", 00:04:37.088 "params": { 00:04:37.088 "impl_name": "posix" 00:04:37.088 } 00:04:37.088 }, 00:04:37.088 { 00:04:37.088 "method": "sock_impl_set_options", 00:04:37.088 "params": { 00:04:37.088 "impl_name": "ssl", 00:04:37.088 "recv_buf_size": 4096, 00:04:37.088 "send_buf_size": 4096, 00:04:37.088 "enable_recv_pipe": true, 00:04:37.088 "enable_quickack": false, 00:04:37.088 "enable_placement_id": 0, 00:04:37.088 "enable_zerocopy_send_server": true, 00:04:37.088 "enable_zerocopy_send_client": false, 00:04:37.088 "zerocopy_threshold": 0, 00:04:37.088 "tls_version": 0, 00:04:37.088 "enable_ktls": false 00:04:37.088 } 00:04:37.088 }, 00:04:37.088 { 00:04:37.088 "method": "sock_impl_set_options", 00:04:37.088 "params": { 00:04:37.088 "impl_name": "posix", 00:04:37.088 "recv_buf_size": 2097152, 00:04:37.088 "send_buf_size": 2097152, 00:04:37.088 "enable_recv_pipe": true, 00:04:37.088 "enable_quickack": false, 00:04:37.088 "enable_placement_id": 0, 00:04:37.088 "enable_zerocopy_send_server": true, 00:04:37.088 "enable_zerocopy_send_client": false, 00:04:37.088 "zerocopy_threshold": 0, 00:04:37.088 "tls_version": 0, 00:04:37.088 "enable_ktls": false 00:04:37.088 } 00:04:37.088 } 00:04:37.088 ] 00:04:37.088 }, 00:04:37.088 { 00:04:37.088 "subsystem": "vmd", 00:04:37.088 "config": [] 00:04:37.088 }, 00:04:37.088 { 00:04:37.088 "subsystem": "accel", 00:04:37.088 "config": [ 00:04:37.088 { 00:04:37.088 "method": "accel_set_options", 00:04:37.088 "params": { 00:04:37.088 "small_cache_size": 128, 00:04:37.088 "large_cache_size": 16, 00:04:37.088 "task_count": 2048, 00:04:37.088 "sequence_count": 2048, 00:04:37.088 "buf_count": 2048 00:04:37.088 } 00:04:37.088 } 00:04:37.088 ] 00:04:37.088 }, 00:04:37.088 { 00:04:37.088 "subsystem": "bdev", 00:04:37.088 "config": [ 00:04:37.088 { 00:04:37.088 "method": "bdev_set_options", 00:04:37.088 "params": { 00:04:37.088 "bdev_io_pool_size": 65535, 00:04:37.088 "bdev_io_cache_size": 256, 00:04:37.088 "bdev_auto_examine": true, 00:04:37.088 "iobuf_small_cache_size": 128, 00:04:37.088 "iobuf_large_cache_size": 16 00:04:37.088 } 00:04:37.088 }, 00:04:37.088 { 00:04:37.088 "method": "bdev_raid_set_options", 00:04:37.088 "params": { 00:04:37.088 "process_window_size_kb": 1024, 00:04:37.088 "process_max_bandwidth_mb_sec": 0 00:04:37.088 } 00:04:37.088 }, 00:04:37.088 { 00:04:37.088 "method": "bdev_iscsi_set_options", 00:04:37.088 "params": { 00:04:37.088 "timeout_sec": 30 00:04:37.088 } 00:04:37.088 }, 00:04:37.088 { 00:04:37.088 "method": "bdev_nvme_set_options", 00:04:37.088 "params": { 00:04:37.088 "action_on_timeout": "none", 00:04:37.088 "timeout_us": 0, 00:04:37.088 "timeout_admin_us": 0, 00:04:37.088 "keep_alive_timeout_ms": 10000, 00:04:37.088 "arbitration_burst": 0, 00:04:37.088 "low_priority_weight": 0, 00:04:37.088 "medium_priority_weight": 0, 00:04:37.088 "high_priority_weight": 0, 00:04:37.088 "nvme_adminq_poll_period_us": 10000, 00:04:37.088 "nvme_ioq_poll_period_us": 0, 00:04:37.088 "io_queue_requests": 0, 00:04:37.088 "delay_cmd_submit": true, 00:04:37.088 "transport_retry_count": 4, 00:04:37.088 "bdev_retry_count": 3, 00:04:37.088 "transport_ack_timeout": 0, 00:04:37.088 "ctrlr_loss_timeout_sec": 0, 00:04:37.088 "reconnect_delay_sec": 0, 00:04:37.088 "fast_io_fail_timeout_sec": 0, 00:04:37.088 "disable_auto_failback": false, 00:04:37.088 "generate_uuids": false, 00:04:37.088 "transport_tos": 0, 00:04:37.088 "nvme_error_stat": false, 00:04:37.088 "rdma_srq_size": 0, 00:04:37.088 "io_path_stat": false, 
00:04:37.089 "allow_accel_sequence": false, 00:04:37.089 "rdma_max_cq_size": 0, 00:04:37.089 "rdma_cm_event_timeout_ms": 0, 00:04:37.089 "dhchap_digests": [ 00:04:37.089 "sha256", 00:04:37.089 "sha384", 00:04:37.089 "sha512" 00:04:37.089 ], 00:04:37.089 "dhchap_dhgroups": [ 00:04:37.089 "null", 00:04:37.089 "ffdhe2048", 00:04:37.089 "ffdhe3072", 00:04:37.089 "ffdhe4096", 00:04:37.089 "ffdhe6144", 00:04:37.089 "ffdhe8192" 00:04:37.089 ] 00:04:37.089 } 00:04:37.089 }, 00:04:37.089 { 00:04:37.089 "method": "bdev_nvme_set_hotplug", 00:04:37.089 "params": { 00:04:37.089 "period_us": 100000, 00:04:37.089 "enable": false 00:04:37.089 } 00:04:37.089 }, 00:04:37.089 { 00:04:37.089 "method": "bdev_wait_for_examine" 00:04:37.089 } 00:04:37.089 ] 00:04:37.089 }, 00:04:37.089 { 00:04:37.089 "subsystem": "scsi", 00:04:37.089 "config": null 00:04:37.089 }, 00:04:37.089 { 00:04:37.089 "subsystem": "scheduler", 00:04:37.089 "config": [ 00:04:37.089 { 00:04:37.089 "method": "framework_set_scheduler", 00:04:37.089 "params": { 00:04:37.089 "name": "static" 00:04:37.089 } 00:04:37.089 } 00:04:37.089 ] 00:04:37.089 }, 00:04:37.089 { 00:04:37.089 "subsystem": "vhost_scsi", 00:04:37.089 "config": [] 00:04:37.089 }, 00:04:37.089 { 00:04:37.089 "subsystem": "vhost_blk", 00:04:37.089 "config": [] 00:04:37.089 }, 00:04:37.089 { 00:04:37.089 "subsystem": "ublk", 00:04:37.089 "config": [] 00:04:37.089 }, 00:04:37.089 { 00:04:37.089 "subsystem": "nbd", 00:04:37.089 "config": [] 00:04:37.089 }, 00:04:37.089 { 00:04:37.089 "subsystem": "nvmf", 00:04:37.089 "config": [ 00:04:37.089 { 00:04:37.089 "method": "nvmf_set_config", 00:04:37.089 "params": { 00:04:37.089 "discovery_filter": "match_any", 00:04:37.089 "admin_cmd_passthru": { 00:04:37.089 "identify_ctrlr": false 00:04:37.089 }, 00:04:37.089 "dhchap_digests": [ 00:04:37.089 "sha256", 00:04:37.089 "sha384", 00:04:37.089 "sha512" 00:04:37.089 ], 00:04:37.089 "dhchap_dhgroups": [ 00:04:37.089 "null", 00:04:37.089 "ffdhe2048", 00:04:37.089 "ffdhe3072", 00:04:37.089 "ffdhe4096", 00:04:37.089 "ffdhe6144", 00:04:37.089 "ffdhe8192" 00:04:37.089 ] 00:04:37.089 } 00:04:37.089 }, 00:04:37.089 { 00:04:37.089 "method": "nvmf_set_max_subsystems", 00:04:37.089 "params": { 00:04:37.089 "max_subsystems": 1024 00:04:37.089 } 00:04:37.089 }, 00:04:37.089 { 00:04:37.089 "method": "nvmf_set_crdt", 00:04:37.089 "params": { 00:04:37.089 "crdt1": 0, 00:04:37.089 "crdt2": 0, 00:04:37.089 "crdt3": 0 00:04:37.089 } 00:04:37.089 }, 00:04:37.089 { 00:04:37.089 "method": "nvmf_create_transport", 00:04:37.089 "params": { 00:04:37.089 "trtype": "TCP", 00:04:37.089 "max_queue_depth": 128, 00:04:37.089 "max_io_qpairs_per_ctrlr": 127, 00:04:37.089 "in_capsule_data_size": 4096, 00:04:37.089 "max_io_size": 131072, 00:04:37.089 "io_unit_size": 131072, 00:04:37.089 "max_aq_depth": 128, 00:04:37.089 "num_shared_buffers": 511, 00:04:37.089 "buf_cache_size": 4294967295, 00:04:37.089 "dif_insert_or_strip": false, 00:04:37.089 "zcopy": false, 00:04:37.089 "c2h_success": true, 00:04:37.089 "sock_priority": 0, 00:04:37.089 "abort_timeout_sec": 1, 00:04:37.089 "ack_timeout": 0, 00:04:37.089 "data_wr_pool_size": 0 00:04:37.089 } 00:04:37.089 } 00:04:37.089 ] 00:04:37.089 }, 00:04:37.089 { 00:04:37.089 "subsystem": "iscsi", 00:04:37.089 "config": [ 00:04:37.089 { 00:04:37.089 "method": "iscsi_set_options", 00:04:37.089 "params": { 00:04:37.089 "node_base": "iqn.2016-06.io.spdk", 00:04:37.089 "max_sessions": 128, 00:04:37.089 "max_connections_per_session": 2, 00:04:37.089 "max_queue_depth": 64, 00:04:37.089 
"default_time2wait": 2, 00:04:37.089 "default_time2retain": 20, 00:04:37.089 "first_burst_length": 8192, 00:04:37.089 "immediate_data": true, 00:04:37.089 "allow_duplicated_isid": false, 00:04:37.089 "error_recovery_level": 0, 00:04:37.089 "nop_timeout": 60, 00:04:37.089 "nop_in_interval": 30, 00:04:37.089 "disable_chap": false, 00:04:37.089 "require_chap": false, 00:04:37.089 "mutual_chap": false, 00:04:37.089 "chap_group": 0, 00:04:37.089 "max_large_datain_per_connection": 64, 00:04:37.089 "max_r2t_per_connection": 4, 00:04:37.089 "pdu_pool_size": 36864, 00:04:37.089 "immediate_data_pool_size": 16384, 00:04:37.089 "data_out_pool_size": 2048 00:04:37.089 } 00:04:37.089 } 00:04:37.089 ] 00:04:37.089 } 00:04:37.089 ] 00:04:37.089 } 00:04:37.089 09:33:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:37.089 09:33:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57493 00:04:37.089 09:33:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 57493 ']' 00:04:37.089 09:33:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 57493 00:04:37.089 09:33:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:04:37.089 09:33:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:37.089 09:33:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57493 00:04:37.089 killing process with pid 57493 00:04:37.089 09:33:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:37.089 09:33:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:37.089 09:33:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57493' 00:04:37.089 09:33:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 57493 00:04:37.089 09:33:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 57493 00:04:38.461 09:33:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57532 00:04:38.461 09:33:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:38.461 09:33:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:43.724 09:33:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57532 00:04:43.724 09:33:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 57532 ']' 00:04:43.724 09:33:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 57532 00:04:43.724 09:33:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:04:43.724 09:33:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:43.724 09:33:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57532 00:04:43.724 killing process with pid 57532 00:04:43.724 09:33:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:43.724 09:33:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:43.724 09:33:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57532' 00:04:43.724 09:33:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- 
# kill 57532 00:04:43.724 09:33:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 57532 00:04:44.659 09:33:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:44.659 09:33:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:44.659 00:04:44.659 real 0m8.632s 00:04:44.659 user 0m8.164s 00:04:44.659 sys 0m0.698s 00:04:44.659 09:33:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:44.659 09:33:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:44.659 ************************************ 00:04:44.659 END TEST skip_rpc_with_json 00:04:44.659 ************************************ 00:04:44.659 09:33:12 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:44.659 09:33:12 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:44.659 09:33:12 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:44.659 09:33:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.659 ************************************ 00:04:44.659 START TEST skip_rpc_with_delay 00:04:44.659 ************************************ 00:04:44.659 09:33:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:04:44.659 09:33:12 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:44.659 09:33:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:44.659 09:33:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:44.659 09:33:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:44.659 09:33:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:44.659 09:33:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:44.659 09:33:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:44.659 09:33:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:44.659 09:33:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:44.659 09:33:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:44.659 09:33:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:44.659 09:33:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:44.659 [2024-11-07 09:33:12.197675] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:44.659 09:33:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:44.659 09:33:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:44.659 09:33:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:44.659 09:33:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:44.659 00:04:44.659 real 0m0.125s 00:04:44.659 user 0m0.061s 00:04:44.659 sys 0m0.062s 00:04:44.659 09:33:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:44.659 ************************************ 00:04:44.659 END TEST skip_rpc_with_delay 00:04:44.659 ************************************ 00:04:44.659 09:33:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:44.659 09:33:12 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:44.659 09:33:12 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:44.659 09:33:12 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:44.659 09:33:12 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:44.659 09:33:12 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:44.659 09:33:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.659 ************************************ 00:04:44.659 START TEST exit_on_failed_rpc_init 00:04:44.659 ************************************ 00:04:44.659 09:33:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:04:44.659 09:33:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57649 00:04:44.659 09:33:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57649 00:04:44.659 09:33:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 57649 ']' 00:04:44.659 09:33:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:44.659 09:33:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:44.659 09:33:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.659 09:33:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:44.659 09:33:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:44.659 09:33:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:44.918 [2024-11-07 09:33:12.353151] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:04:44.918 [2024-11-07 09:33:12.353261] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57649 ] 00:04:44.918 [2024-11-07 09:33:12.505465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.176 [2024-11-07 09:33:12.596059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.743 09:33:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:45.743 09:33:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:04:45.743 09:33:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:45.743 09:33:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:45.743 09:33:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:45.743 09:33:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:45.743 09:33:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:45.743 09:33:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:45.743 09:33:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:45.743 09:33:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:45.743 09:33:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:45.743 09:33:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:45.743 09:33:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:45.743 09:33:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:45.743 09:33:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:45.743 [2024-11-07 09:33:13.271985] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:04:45.743 [2024-11-07 09:33:13.272289] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57667 ] 00:04:46.001 [2024-11-07 09:33:13.434487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.001 [2024-11-07 09:33:13.545744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:46.001 [2024-11-07 09:33:13.545819] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:46.001 [2024-11-07 09:33:13.545833] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:46.001 [2024-11-07 09:33:13.545846] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:46.260 09:33:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:46.260 09:33:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:46.260 09:33:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:46.260 09:33:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:46.260 09:33:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:46.260 09:33:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:46.260 09:33:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:46.260 09:33:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57649 00:04:46.260 09:33:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 57649 ']' 00:04:46.260 09:33:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 57649 00:04:46.260 09:33:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:04:46.260 09:33:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:46.260 09:33:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57649 00:04:46.260 killing process with pid 57649 00:04:46.260 09:33:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:46.260 09:33:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:46.260 09:33:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57649' 00:04:46.260 09:33:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 57649 00:04:46.260 09:33:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 57649 00:04:47.648 00:04:47.648 real 0m2.712s 00:04:47.648 user 0m2.995s 00:04:47.648 sys 0m0.449s 00:04:47.648 09:33:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:47.648 ************************************ 00:04:47.648 END TEST exit_on_failed_rpc_init 00:04:47.648 ************************************ 00:04:47.648 09:33:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:47.648 09:33:15 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:47.648 ************************************ 00:04:47.648 END TEST skip_rpc 00:04:47.648 ************************************ 00:04:47.648 00:04:47.648 real 0m18.062s 00:04:47.648 user 0m17.208s 00:04:47.648 sys 0m1.704s 00:04:47.648 09:33:15 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:47.648 09:33:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.648 09:33:15 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:47.648 09:33:15 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:47.648 09:33:15 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:47.648 09:33:15 -- common/autotest_common.sh@10 -- # set +x 00:04:47.648 
************************************ 00:04:47.648 START TEST rpc_client 00:04:47.648 ************************************ 00:04:47.648 09:33:15 rpc_client -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:47.648 * Looking for test storage... 00:04:47.648 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:47.648 09:33:15 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:47.648 09:33:15 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:04:47.648 09:33:15 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:47.648 09:33:15 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:47.648 09:33:15 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:47.648 09:33:15 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:47.648 09:33:15 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:47.648 09:33:15 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:47.648 09:33:15 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:47.648 09:33:15 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:47.648 09:33:15 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:47.648 09:33:15 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:47.648 09:33:15 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:47.648 09:33:15 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:47.648 09:33:15 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:47.648 09:33:15 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:47.648 09:33:15 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:47.648 09:33:15 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:47.648 09:33:15 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:47.648 09:33:15 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:47.648 09:33:15 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:47.648 09:33:15 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:47.648 09:33:15 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:47.648 09:33:15 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:47.648 09:33:15 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:47.648 09:33:15 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:47.648 09:33:15 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:47.648 09:33:15 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:47.648 09:33:15 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:47.648 09:33:15 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:47.648 09:33:15 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:47.648 09:33:15 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:47.648 09:33:15 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:47.648 09:33:15 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:47.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.648 --rc genhtml_branch_coverage=1 00:04:47.648 --rc genhtml_function_coverage=1 00:04:47.649 --rc genhtml_legend=1 00:04:47.649 --rc geninfo_all_blocks=1 00:04:47.649 --rc geninfo_unexecuted_blocks=1 00:04:47.649 00:04:47.649 ' 00:04:47.649 09:33:15 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:47.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.649 --rc genhtml_branch_coverage=1 00:04:47.649 --rc genhtml_function_coverage=1 00:04:47.649 --rc genhtml_legend=1 00:04:47.649 --rc geninfo_all_blocks=1 00:04:47.649 --rc geninfo_unexecuted_blocks=1 00:04:47.649 00:04:47.649 ' 00:04:47.649 09:33:15 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:47.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.649 --rc genhtml_branch_coverage=1 00:04:47.649 --rc genhtml_function_coverage=1 00:04:47.649 --rc genhtml_legend=1 00:04:47.649 --rc geninfo_all_blocks=1 00:04:47.649 --rc geninfo_unexecuted_blocks=1 00:04:47.649 00:04:47.649 ' 00:04:47.649 09:33:15 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:47.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.649 --rc genhtml_branch_coverage=1 00:04:47.649 --rc genhtml_function_coverage=1 00:04:47.649 --rc genhtml_legend=1 00:04:47.649 --rc geninfo_all_blocks=1 00:04:47.649 --rc geninfo_unexecuted_blocks=1 00:04:47.649 00:04:47.649 ' 00:04:47.649 09:33:15 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:47.649 OK 00:04:47.649 09:33:15 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:47.649 00:04:47.649 real 0m0.201s 00:04:47.649 user 0m0.113s 00:04:47.649 sys 0m0.094s 00:04:47.649 09:33:15 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:47.649 09:33:15 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:47.649 ************************************ 00:04:47.649 END TEST rpc_client 00:04:47.649 ************************************ 00:04:47.910 09:33:15 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:47.910 09:33:15 -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:47.910 09:33:15 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:47.910 09:33:15 -- common/autotest_common.sh@10 -- # set +x 00:04:47.910 ************************************ 00:04:47.910 START TEST json_config 00:04:47.910 ************************************ 00:04:47.910 09:33:15 json_config -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:47.910 09:33:15 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:47.910 09:33:15 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:04:47.910 09:33:15 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:47.910 09:33:15 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:47.910 09:33:15 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:47.910 09:33:15 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:47.910 09:33:15 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:47.910 09:33:15 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:47.910 09:33:15 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:47.910 09:33:15 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:47.910 09:33:15 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:47.910 09:33:15 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:47.910 09:33:15 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:47.910 09:33:15 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:47.910 09:33:15 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:47.910 09:33:15 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:47.910 09:33:15 json_config -- scripts/common.sh@345 -- # : 1 00:04:47.910 09:33:15 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:47.910 09:33:15 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:47.910 09:33:15 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:47.910 09:33:15 json_config -- scripts/common.sh@353 -- # local d=1 00:04:47.910 09:33:15 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:47.910 09:33:15 json_config -- scripts/common.sh@355 -- # echo 1 00:04:47.910 09:33:15 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:47.910 09:33:15 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:47.910 09:33:15 json_config -- scripts/common.sh@353 -- # local d=2 00:04:47.910 09:33:15 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:47.910 09:33:15 json_config -- scripts/common.sh@355 -- # echo 2 00:04:47.910 09:33:15 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:47.910 09:33:15 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:47.910 09:33:15 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:47.910 09:33:15 json_config -- scripts/common.sh@368 -- # return 0 00:04:47.910 09:33:15 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:47.910 09:33:15 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:47.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.910 --rc genhtml_branch_coverage=1 00:04:47.910 --rc genhtml_function_coverage=1 00:04:47.910 --rc genhtml_legend=1 00:04:47.910 --rc geninfo_all_blocks=1 00:04:47.910 --rc geninfo_unexecuted_blocks=1 00:04:47.910 00:04:47.910 ' 00:04:47.910 09:33:15 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:47.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.910 --rc genhtml_branch_coverage=1 00:04:47.910 --rc genhtml_function_coverage=1 00:04:47.910 --rc genhtml_legend=1 00:04:47.910 --rc geninfo_all_blocks=1 00:04:47.910 --rc geninfo_unexecuted_blocks=1 00:04:47.910 00:04:47.910 ' 00:04:47.910 09:33:15 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:47.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.910 --rc genhtml_branch_coverage=1 00:04:47.910 --rc genhtml_function_coverage=1 00:04:47.910 --rc genhtml_legend=1 00:04:47.910 --rc geninfo_all_blocks=1 00:04:47.910 --rc geninfo_unexecuted_blocks=1 00:04:47.910 00:04:47.910 ' 00:04:47.910 09:33:15 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:47.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.910 --rc genhtml_branch_coverage=1 00:04:47.910 --rc genhtml_function_coverage=1 00:04:47.910 --rc genhtml_legend=1 00:04:47.910 --rc geninfo_all_blocks=1 00:04:47.910 --rc geninfo_unexecuted_blocks=1 00:04:47.910 00:04:47.910 ' 00:04:47.910 09:33:15 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:47.910 09:33:15 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:47.910 09:33:15 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:47.910 09:33:15 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:47.910 09:33:15 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:47.910 09:33:15 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:47.910 09:33:15 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:47.910 09:33:15 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:47.910 09:33:15 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:47.910 09:33:15 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:47.910 09:33:15 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:47.910 09:33:15 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:47.910 09:33:15 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ad18ec5b-c807-48e1-8f0a-2ea67531be3c 00:04:47.910 09:33:15 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=ad18ec5b-c807-48e1-8f0a-2ea67531be3c 00:04:47.910 09:33:15 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:47.910 09:33:15 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:47.910 09:33:15 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:47.910 09:33:15 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:47.910 09:33:15 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:47.910 09:33:15 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:47.910 09:33:15 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:47.910 09:33:15 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:47.910 09:33:15 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:47.910 09:33:15 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.910 09:33:15 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.910 09:33:15 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.910 09:33:15 json_config -- paths/export.sh@5 -- # export PATH 00:04:47.910 09:33:15 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.910 09:33:15 json_config -- nvmf/common.sh@51 -- # : 0 00:04:47.910 09:33:15 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:47.910 09:33:15 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:47.910 09:33:15 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:47.910 09:33:15 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:47.910 09:33:15 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:47.910 09:33:15 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:47.910 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:47.910 09:33:15 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:47.910 09:33:15 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:47.910 09:33:15 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:47.910 09:33:15 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:47.910 09:33:15 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:47.910 09:33:15 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:47.910 09:33:15 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:47.910 09:33:15 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:47.910 09:33:15 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:47.910 WARNING: No tests are enabled so not running JSON configuration tests 00:04:47.910 09:33:15 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:47.910 00:04:47.910 real 0m0.150s 00:04:47.910 user 0m0.096s 00:04:47.910 sys 0m0.053s 00:04:47.910 09:33:15 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:47.910 ************************************ 00:04:47.910 END TEST json_config 00:04:47.910 ************************************ 00:04:47.910 09:33:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:47.911 09:33:15 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:47.911 09:33:15 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:47.911 09:33:15 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:47.911 09:33:15 -- common/autotest_common.sh@10 -- # set +x 00:04:47.911 ************************************ 00:04:47.911 START TEST json_config_extra_key 00:04:47.911 ************************************ 00:04:47.911 09:33:15 json_config_extra_key -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:48.172 09:33:15 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:48.172 09:33:15 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:04:48.172 09:33:15 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:48.172 09:33:15 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:48.172 09:33:15 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:48.172 09:33:15 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:48.172 09:33:15 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:48.172 09:33:15 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:48.172 09:33:15 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:48.172 09:33:15 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:48.172 09:33:15 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:48.172 09:33:15 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:48.172 09:33:15 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:48.172 09:33:15 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:48.172 09:33:15 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:48.172 09:33:15 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:48.172 09:33:15 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:48.172 09:33:15 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:48.172 09:33:15 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:48.172 09:33:15 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:48.172 09:33:15 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:48.172 09:33:15 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:48.172 09:33:15 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:48.172 09:33:15 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:48.172 09:33:15 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:48.172 09:33:15 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:48.172 09:33:15 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:48.172 09:33:15 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:48.172 09:33:15 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:48.172 09:33:15 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:48.172 09:33:15 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:48.172 09:33:15 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:48.172 09:33:15 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:48.172 09:33:15 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:48.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.172 --rc genhtml_branch_coverage=1 00:04:48.172 --rc genhtml_function_coverage=1 00:04:48.172 --rc genhtml_legend=1 00:04:48.172 --rc geninfo_all_blocks=1 00:04:48.172 --rc geninfo_unexecuted_blocks=1 00:04:48.172 00:04:48.172 ' 00:04:48.172 09:33:15 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:48.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.172 --rc genhtml_branch_coverage=1 00:04:48.172 --rc genhtml_function_coverage=1 00:04:48.172 --rc genhtml_legend=1 00:04:48.172 --rc geninfo_all_blocks=1 00:04:48.172 --rc geninfo_unexecuted_blocks=1 00:04:48.172 00:04:48.172 ' 00:04:48.172 09:33:15 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:48.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.172 --rc genhtml_branch_coverage=1 00:04:48.172 --rc genhtml_function_coverage=1 00:04:48.172 --rc genhtml_legend=1 00:04:48.172 --rc geninfo_all_blocks=1 00:04:48.172 --rc geninfo_unexecuted_blocks=1 00:04:48.172 00:04:48.172 ' 00:04:48.172 09:33:15 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:48.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.172 --rc genhtml_branch_coverage=1 00:04:48.172 --rc 
genhtml_function_coverage=1 00:04:48.172 --rc genhtml_legend=1 00:04:48.172 --rc geninfo_all_blocks=1 00:04:48.172 --rc geninfo_unexecuted_blocks=1 00:04:48.172 00:04:48.172 ' 00:04:48.172 09:33:15 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:48.172 09:33:15 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:48.172 09:33:15 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:48.172 09:33:15 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:48.172 09:33:15 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:48.172 09:33:15 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:48.172 09:33:15 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:48.172 09:33:15 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:48.172 09:33:15 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:48.172 09:33:15 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:48.172 09:33:15 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:48.172 09:33:15 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:48.172 09:33:15 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ad18ec5b-c807-48e1-8f0a-2ea67531be3c 00:04:48.172 09:33:15 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=ad18ec5b-c807-48e1-8f0a-2ea67531be3c 00:04:48.172 09:33:15 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:48.172 09:33:15 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:48.172 09:33:15 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:48.172 09:33:15 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:48.172 09:33:15 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:48.172 09:33:15 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:48.172 09:33:15 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:48.172 09:33:15 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:48.172 09:33:15 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:48.173 09:33:15 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.173 09:33:15 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.173 09:33:15 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.173 09:33:15 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:48.173 09:33:15 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.173 09:33:15 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:48.173 09:33:15 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:48.173 09:33:15 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:48.173 09:33:15 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:48.173 09:33:15 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:48.173 09:33:15 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:48.173 09:33:15 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:48.173 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:48.173 09:33:15 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:48.173 09:33:15 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:48.173 09:33:15 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:48.173 09:33:15 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:48.173 09:33:15 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:48.173 09:33:15 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:48.173 09:33:15 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:48.173 09:33:15 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:48.173 09:33:15 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:48.173 09:33:15 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:48.173 09:33:15 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:48.173 09:33:15 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:48.173 INFO: launching applications... 00:04:48.173 09:33:15 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:48.173 09:33:15 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
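Note on the "/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected" message traced above: the preceding xtrace line shows the test that fails, '[' '' -eq 1 ']'. The test builtin requires both operands of -eq to be integers, so an empty expansion aborts the comparison (it evaluates false and the script carries on, which is why the run continues normally). A minimal sketch of the pattern and a defensive rewrite, with SOME_FLAG as a hypothetical stand-in since the log does not name the variable that expanded empty:

    #!/usr/bin/env bash
    SOME_FLAG=""                           # unset/empty, as in the failing run
    if [ "$SOME_FLAG" -eq 1 ]; then        # '[' '' -eq 1 ']' -> integer expression expected
        echo "feature on"
    fi
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then   # default empty to 0 so the operand is always numeric
        echo "feature on"
    fi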
00:04:48.173 09:33:15 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:48.173 09:33:15 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:48.173 09:33:15 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:48.173 09:33:15 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:48.173 09:33:15 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:48.173 09:33:15 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:48.173 09:33:15 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:48.173 09:33:15 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:48.173 Waiting for target to run... 00:04:48.173 09:33:15 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57861 00:04:48.173 09:33:15 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:48.173 09:33:15 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57861 /var/tmp/spdk_tgt.sock 00:04:48.173 09:33:15 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 57861 ']' 00:04:48.173 09:33:15 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:48.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:48.173 09:33:15 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:48.173 09:33:15 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:48.173 09:33:15 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:48.173 09:33:15 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:48.173 09:33:15 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:48.173 [2024-11-07 09:33:15.814719] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:04:48.173 [2024-11-07 09:33:15.814911] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57861 ] 00:04:48.744 [2024-11-07 09:33:16.353733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.003 [2024-11-07 09:33:16.495415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.571 09:33:17 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:49.571 09:33:17 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:04:49.571 00:04:49.571 09:33:17 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:49.571 INFO: shutting down applications... 00:04:49.571 09:33:17 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
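The launch sequence above starts spdk_tgt with -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json extra_key.json, then blocks in waitforlisten until pid 57861 is serving the RPC socket. A rough sketch of that wait-for-listen idea in plain bash; the real waitforlisten in autotest_common.sh is not shown in this log and differs in detail:

    # Poll until $sock appears (or the target dies); socket creation is used
    # here as a proxy for "listening", which the real helper verifies harder.
    wait_for_rpc_socket() {
        local pid=$1 sock=$2 i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target exited early
            [[ -S $sock ]] && return 0               # UNIX socket is in place
            sleep 0.1
        done
        return 1
    }
    # usage: wait_for_rpc_socket 57861 /var/tmp/spdk_tgt.sock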
00:04:49.571 09:33:17 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:49.571 09:33:17 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:49.571 09:33:17 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:49.571 09:33:17 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57861 ]] 00:04:49.571 09:33:17 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57861 00:04:49.571 09:33:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:49.571 09:33:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:49.571 09:33:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57861 00:04:49.571 09:33:17 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:50.139 09:33:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:50.139 09:33:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:50.139 09:33:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57861 00:04:50.139 09:33:17 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:50.723 09:33:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:50.723 09:33:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:50.723 09:33:18 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57861 00:04:50.723 09:33:18 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:51.005 09:33:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:51.006 09:33:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:51.006 09:33:18 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57861 00:04:51.006 09:33:18 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:51.573 09:33:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:51.573 09:33:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:51.573 09:33:19 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57861 00:04:51.573 09:33:19 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:51.573 09:33:19 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:51.573 09:33:19 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:51.573 SPDK target shutdown done 00:04:51.573 09:33:19 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:51.573 Success 00:04:51.573 09:33:19 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:51.573 00:04:51.573 real 0m3.560s 00:04:51.573 user 0m3.092s 00:04:51.573 sys 0m0.662s 00:04:51.573 09:33:19 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:51.573 09:33:19 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:51.573 ************************************ 00:04:51.573 END TEST json_config_extra_key 00:04:51.573 ************************************ 00:04:51.573 09:33:19 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:51.573 09:33:19 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:51.573 09:33:19 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:51.573 09:33:19 -- common/autotest_common.sh@10 -- # set +x 00:04:51.573 
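The shutdown traced above (json_config/common.sh lines 38-45) is a plain SIGINT-then-poll loop: kill -0 delivers no signal and only probes that the pid still exists, and the loop retries up to 30 times at 0.5 s intervals before breaking once the probe fails. Reduced to its core pattern:

    kill -SIGINT "$pid"                      # ask spdk_tgt to shut down cleanly
    for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2>/dev/null || break  # signal 0 = existence probe; gone -> done
        sleep 0.5
    done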
************************************ 00:04:51.573 START TEST alias_rpc 00:04:51.573 ************************************ 00:04:51.573 09:33:19 alias_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:51.573 * Looking for test storage... 00:04:51.573 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:51.573 09:33:19 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:51.573 09:33:19 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:51.573 09:33:19 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:51.834 09:33:19 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:51.834 09:33:19 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:51.834 09:33:19 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:51.834 09:33:19 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:51.834 09:33:19 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:51.834 09:33:19 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:51.834 09:33:19 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:51.834 09:33:19 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:51.834 09:33:19 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:51.834 09:33:19 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:51.834 09:33:19 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:51.834 09:33:19 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:51.834 09:33:19 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:51.834 09:33:19 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:51.834 09:33:19 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:51.834 09:33:19 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:51.834 09:33:19 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:51.834 09:33:19 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:51.834 09:33:19 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:51.834 09:33:19 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:51.834 09:33:19 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:51.834 09:33:19 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:51.834 09:33:19 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:51.834 09:33:19 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:51.834 09:33:19 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:51.834 09:33:19 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:51.834 09:33:19 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:51.834 09:33:19 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:51.834 09:33:19 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:51.834 09:33:19 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:51.834 09:33:19 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:51.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.834 --rc genhtml_branch_coverage=1 00:04:51.834 --rc genhtml_function_coverage=1 00:04:51.834 --rc genhtml_legend=1 00:04:51.834 --rc geninfo_all_blocks=1 00:04:51.834 --rc geninfo_unexecuted_blocks=1 00:04:51.834 00:04:51.834 ' 00:04:51.834 09:33:19 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:51.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.834 --rc genhtml_branch_coverage=1 00:04:51.834 --rc genhtml_function_coverage=1 00:04:51.834 --rc genhtml_legend=1 00:04:51.834 --rc geninfo_all_blocks=1 00:04:51.834 --rc geninfo_unexecuted_blocks=1 00:04:51.834 00:04:51.834 ' 00:04:51.834 09:33:19 alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:51.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.834 --rc genhtml_branch_coverage=1 00:04:51.834 --rc genhtml_function_coverage=1 00:04:51.834 --rc genhtml_legend=1 00:04:51.834 --rc geninfo_all_blocks=1 00:04:51.834 --rc geninfo_unexecuted_blocks=1 00:04:51.834 00:04:51.834 ' 00:04:51.834 09:33:19 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:51.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.834 --rc genhtml_branch_coverage=1 00:04:51.834 --rc genhtml_function_coverage=1 00:04:51.834 --rc genhtml_legend=1 00:04:51.834 --rc geninfo_all_blocks=1 00:04:51.834 --rc geninfo_unexecuted_blocks=1 00:04:51.834 00:04:51.834 ' 00:04:51.834 09:33:19 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:51.834 09:33:19 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57959 00:04:51.834 09:33:19 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57959 00:04:51.834 09:33:19 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 57959 ']' 00:04:51.834 09:33:19 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.834 09:33:19 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:51.834 09:33:19 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:51.834 09:33:19 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:04:51.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:51.834 09:33:19 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:51.834 09:33:19 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.834 [2024-11-07 09:33:19.374507] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:04:51.834 [2024-11-07 09:33:19.374664] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57959 ] 00:04:52.104 [2024-11-07 09:33:19.533766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.104 [2024-11-07 09:33:19.637995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.674 09:33:20 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:52.674 09:33:20 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:52.674 09:33:20 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:52.932 09:33:20 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57959 00:04:52.932 09:33:20 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 57959 ']' 00:04:52.932 09:33:20 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 57959 00:04:52.932 09:33:20 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:04:52.932 09:33:20 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:52.932 09:33:20 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57959 00:04:52.932 killing process with pid 57959 00:04:52.932 09:33:20 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:52.932 09:33:20 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:52.932 09:33:20 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57959' 00:04:52.932 09:33:20 alias_rpc -- common/autotest_common.sh@971 -- # kill 57959 00:04:52.932 09:33:20 alias_rpc -- common/autotest_common.sh@976 -- # wait 57959 00:04:54.312 ************************************ 00:04:54.312 END TEST alias_rpc 00:04:54.312 ************************************ 00:04:54.312 00:04:54.312 real 0m2.586s 00:04:54.312 user 0m2.642s 00:04:54.312 sys 0m0.461s 00:04:54.312 09:33:21 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:54.312 09:33:21 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.312 09:33:21 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:54.312 09:33:21 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:54.312 09:33:21 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:54.312 09:33:21 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:54.312 09:33:21 -- common/autotest_common.sh@10 -- # set +x 00:04:54.312 ************************************ 00:04:54.312 START TEST spdkcli_tcp 00:04:54.312 ************************************ 00:04:54.312 09:33:21 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:54.312 * Looking for test storage... 
00:04:54.312 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:54.312 09:33:21 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:54.312 09:33:21 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:04:54.312 09:33:21 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:54.312 09:33:21 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:54.312 09:33:21 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:54.312 09:33:21 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:54.312 09:33:21 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:54.312 09:33:21 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:54.312 09:33:21 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:54.312 09:33:21 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:54.312 09:33:21 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:54.312 09:33:21 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:54.312 09:33:21 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:54.312 09:33:21 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:54.312 09:33:21 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:54.312 09:33:21 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:54.312 09:33:21 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:54.312 09:33:21 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:54.312 09:33:21 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:54.312 09:33:21 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:54.312 09:33:21 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:54.312 09:33:21 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:54.312 09:33:21 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:54.312 09:33:21 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:54.312 09:33:21 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:54.312 09:33:21 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:54.313 09:33:21 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:54.313 09:33:21 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:54.313 09:33:21 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:54.313 09:33:21 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:54.313 09:33:21 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:54.313 09:33:21 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:54.313 09:33:21 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:54.313 09:33:21 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:54.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.313 --rc genhtml_branch_coverage=1 00:04:54.313 --rc genhtml_function_coverage=1 00:04:54.313 --rc genhtml_legend=1 00:04:54.313 --rc geninfo_all_blocks=1 00:04:54.313 --rc geninfo_unexecuted_blocks=1 00:04:54.313 00:04:54.313 ' 00:04:54.313 09:33:21 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:54.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.313 --rc genhtml_branch_coverage=1 00:04:54.313 --rc genhtml_function_coverage=1 00:04:54.313 --rc genhtml_legend=1 00:04:54.313 --rc geninfo_all_blocks=1 00:04:54.313 --rc geninfo_unexecuted_blocks=1 00:04:54.313 
00:04:54.313 ' 00:04:54.313 09:33:21 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:54.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.313 --rc genhtml_branch_coverage=1 00:04:54.313 --rc genhtml_function_coverage=1 00:04:54.313 --rc genhtml_legend=1 00:04:54.313 --rc geninfo_all_blocks=1 00:04:54.313 --rc geninfo_unexecuted_blocks=1 00:04:54.313 00:04:54.313 ' 00:04:54.313 09:33:21 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:54.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.313 --rc genhtml_branch_coverage=1 00:04:54.313 --rc genhtml_function_coverage=1 00:04:54.313 --rc genhtml_legend=1 00:04:54.313 --rc geninfo_all_blocks=1 00:04:54.313 --rc geninfo_unexecuted_blocks=1 00:04:54.313 00:04:54.313 ' 00:04:54.313 09:33:21 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:54.313 09:33:21 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:54.313 09:33:21 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:54.313 09:33:21 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:54.313 09:33:21 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:54.313 09:33:21 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:54.313 09:33:21 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:54.313 09:33:21 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:54.313 09:33:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:54.313 09:33:21 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58050 00:04:54.313 09:33:21 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58050 00:04:54.313 09:33:21 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:54.313 09:33:21 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 58050 ']' 00:04:54.313 09:33:21 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.313 09:33:21 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:54.313 09:33:21 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.313 09:33:21 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:54.313 09:33:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:54.572 [2024-11-07 09:33:22.002210] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:04:54.572 [2024-11-07 09:33:22.002903] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58050 ] 00:04:54.572 [2024-11-07 09:33:22.161672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:54.829 [2024-11-07 09:33:22.263431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.829 [2024-11-07 09:33:22.263444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:55.395 09:33:22 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:55.395 09:33:22 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:04:55.395 09:33:22 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:55.395 09:33:22 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58067 00:04:55.395 09:33:22 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:55.395 [ 00:04:55.395 "bdev_malloc_delete", 00:04:55.395 "bdev_malloc_create", 00:04:55.395 "bdev_null_resize", 00:04:55.395 "bdev_null_delete", 00:04:55.395 "bdev_null_create", 00:04:55.395 "bdev_nvme_cuse_unregister", 00:04:55.395 "bdev_nvme_cuse_register", 00:04:55.395 "bdev_opal_new_user", 00:04:55.395 "bdev_opal_set_lock_state", 00:04:55.395 "bdev_opal_delete", 00:04:55.395 "bdev_opal_get_info", 00:04:55.395 "bdev_opal_create", 00:04:55.395 "bdev_nvme_opal_revert", 00:04:55.395 "bdev_nvme_opal_init", 00:04:55.395 "bdev_nvme_send_cmd", 00:04:55.395 "bdev_nvme_set_keys", 00:04:55.395 "bdev_nvme_get_path_iostat", 00:04:55.395 "bdev_nvme_get_mdns_discovery_info", 00:04:55.395 "bdev_nvme_stop_mdns_discovery", 00:04:55.395 "bdev_nvme_start_mdns_discovery", 00:04:55.395 "bdev_nvme_set_multipath_policy", 00:04:55.395 "bdev_nvme_set_preferred_path", 00:04:55.395 "bdev_nvme_get_io_paths", 00:04:55.395 "bdev_nvme_remove_error_injection", 00:04:55.395 "bdev_nvme_add_error_injection", 00:04:55.395 "bdev_nvme_get_discovery_info", 00:04:55.395 "bdev_nvme_stop_discovery", 00:04:55.395 "bdev_nvme_start_discovery", 00:04:55.395 "bdev_nvme_get_controller_health_info", 00:04:55.395 "bdev_nvme_disable_controller", 00:04:55.395 "bdev_nvme_enable_controller", 00:04:55.395 "bdev_nvme_reset_controller", 00:04:55.395 "bdev_nvme_get_transport_statistics", 00:04:55.395 "bdev_nvme_apply_firmware", 00:04:55.395 "bdev_nvme_detach_controller", 00:04:55.395 "bdev_nvme_get_controllers", 00:04:55.395 "bdev_nvme_attach_controller", 00:04:55.395 "bdev_nvme_set_hotplug", 00:04:55.395 "bdev_nvme_set_options", 00:04:55.395 "bdev_passthru_delete", 00:04:55.395 "bdev_passthru_create", 00:04:55.395 "bdev_lvol_set_parent_bdev", 00:04:55.395 "bdev_lvol_set_parent", 00:04:55.395 "bdev_lvol_check_shallow_copy", 00:04:55.395 "bdev_lvol_start_shallow_copy", 00:04:55.395 "bdev_lvol_grow_lvstore", 00:04:55.395 "bdev_lvol_get_lvols", 00:04:55.395 "bdev_lvol_get_lvstores", 00:04:55.395 "bdev_lvol_delete", 00:04:55.395 "bdev_lvol_set_read_only", 00:04:55.395 "bdev_lvol_resize", 00:04:55.395 "bdev_lvol_decouple_parent", 00:04:55.395 "bdev_lvol_inflate", 00:04:55.395 "bdev_lvol_rename", 00:04:55.395 "bdev_lvol_clone_bdev", 00:04:55.395 "bdev_lvol_clone", 00:04:55.395 "bdev_lvol_snapshot", 00:04:55.395 "bdev_lvol_create", 00:04:55.395 "bdev_lvol_delete_lvstore", 00:04:55.395 "bdev_lvol_rename_lvstore", 00:04:55.395 
"bdev_lvol_create_lvstore", 00:04:55.395 "bdev_raid_set_options", 00:04:55.395 "bdev_raid_remove_base_bdev", 00:04:55.395 "bdev_raid_add_base_bdev", 00:04:55.395 "bdev_raid_delete", 00:04:55.395 "bdev_raid_create", 00:04:55.395 "bdev_raid_get_bdevs", 00:04:55.395 "bdev_error_inject_error", 00:04:55.395 "bdev_error_delete", 00:04:55.395 "bdev_error_create", 00:04:55.395 "bdev_split_delete", 00:04:55.395 "bdev_split_create", 00:04:55.395 "bdev_delay_delete", 00:04:55.395 "bdev_delay_create", 00:04:55.395 "bdev_delay_update_latency", 00:04:55.395 "bdev_zone_block_delete", 00:04:55.395 "bdev_zone_block_create", 00:04:55.395 "blobfs_create", 00:04:55.395 "blobfs_detect", 00:04:55.395 "blobfs_set_cache_size", 00:04:55.395 "bdev_xnvme_delete", 00:04:55.395 "bdev_xnvme_create", 00:04:55.395 "bdev_aio_delete", 00:04:55.395 "bdev_aio_rescan", 00:04:55.395 "bdev_aio_create", 00:04:55.395 "bdev_ftl_set_property", 00:04:55.395 "bdev_ftl_get_properties", 00:04:55.395 "bdev_ftl_get_stats", 00:04:55.395 "bdev_ftl_unmap", 00:04:55.395 "bdev_ftl_unload", 00:04:55.395 "bdev_ftl_delete", 00:04:55.395 "bdev_ftl_load", 00:04:55.395 "bdev_ftl_create", 00:04:55.395 "bdev_virtio_attach_controller", 00:04:55.395 "bdev_virtio_scsi_get_devices", 00:04:55.395 "bdev_virtio_detach_controller", 00:04:55.395 "bdev_virtio_blk_set_hotplug", 00:04:55.395 "bdev_iscsi_delete", 00:04:55.395 "bdev_iscsi_create", 00:04:55.395 "bdev_iscsi_set_options", 00:04:55.395 "accel_error_inject_error", 00:04:55.395 "ioat_scan_accel_module", 00:04:55.395 "dsa_scan_accel_module", 00:04:55.395 "iaa_scan_accel_module", 00:04:55.395 "keyring_file_remove_key", 00:04:55.395 "keyring_file_add_key", 00:04:55.395 "keyring_linux_set_options", 00:04:55.395 "fsdev_aio_delete", 00:04:55.395 "fsdev_aio_create", 00:04:55.395 "iscsi_get_histogram", 00:04:55.395 "iscsi_enable_histogram", 00:04:55.395 "iscsi_set_options", 00:04:55.395 "iscsi_get_auth_groups", 00:04:55.395 "iscsi_auth_group_remove_secret", 00:04:55.395 "iscsi_auth_group_add_secret", 00:04:55.395 "iscsi_delete_auth_group", 00:04:55.395 "iscsi_create_auth_group", 00:04:55.395 "iscsi_set_discovery_auth", 00:04:55.395 "iscsi_get_options", 00:04:55.395 "iscsi_target_node_request_logout", 00:04:55.395 "iscsi_target_node_set_redirect", 00:04:55.395 "iscsi_target_node_set_auth", 00:04:55.395 "iscsi_target_node_add_lun", 00:04:55.395 "iscsi_get_stats", 00:04:55.395 "iscsi_get_connections", 00:04:55.395 "iscsi_portal_group_set_auth", 00:04:55.395 "iscsi_start_portal_group", 00:04:55.395 "iscsi_delete_portal_group", 00:04:55.395 "iscsi_create_portal_group", 00:04:55.395 "iscsi_get_portal_groups", 00:04:55.395 "iscsi_delete_target_node", 00:04:55.395 "iscsi_target_node_remove_pg_ig_maps", 00:04:55.395 "iscsi_target_node_add_pg_ig_maps", 00:04:55.395 "iscsi_create_target_node", 00:04:55.395 "iscsi_get_target_nodes", 00:04:55.395 "iscsi_delete_initiator_group", 00:04:55.395 "iscsi_initiator_group_remove_initiators", 00:04:55.395 "iscsi_initiator_group_add_initiators", 00:04:55.395 "iscsi_create_initiator_group", 00:04:55.395 "iscsi_get_initiator_groups", 00:04:55.395 "nvmf_set_crdt", 00:04:55.395 "nvmf_set_config", 00:04:55.395 "nvmf_set_max_subsystems", 00:04:55.395 "nvmf_stop_mdns_prr", 00:04:55.395 "nvmf_publish_mdns_prr", 00:04:55.395 "nvmf_subsystem_get_listeners", 00:04:55.395 "nvmf_subsystem_get_qpairs", 00:04:55.395 "nvmf_subsystem_get_controllers", 00:04:55.395 "nvmf_get_stats", 00:04:55.395 "nvmf_get_transports", 00:04:55.395 "nvmf_create_transport", 00:04:55.395 "nvmf_get_targets", 00:04:55.395 
"nvmf_delete_target", 00:04:55.395 "nvmf_create_target", 00:04:55.395 "nvmf_subsystem_allow_any_host", 00:04:55.395 "nvmf_subsystem_set_keys", 00:04:55.395 "nvmf_subsystem_remove_host", 00:04:55.395 "nvmf_subsystem_add_host", 00:04:55.395 "nvmf_ns_remove_host", 00:04:55.395 "nvmf_ns_add_host", 00:04:55.395 "nvmf_subsystem_remove_ns", 00:04:55.395 "nvmf_subsystem_set_ns_ana_group", 00:04:55.395 "nvmf_subsystem_add_ns", 00:04:55.395 "nvmf_subsystem_listener_set_ana_state", 00:04:55.395 "nvmf_discovery_get_referrals", 00:04:55.395 "nvmf_discovery_remove_referral", 00:04:55.395 "nvmf_discovery_add_referral", 00:04:55.395 "nvmf_subsystem_remove_listener", 00:04:55.395 "nvmf_subsystem_add_listener", 00:04:55.395 "nvmf_delete_subsystem", 00:04:55.395 "nvmf_create_subsystem", 00:04:55.395 "nvmf_get_subsystems", 00:04:55.395 "env_dpdk_get_mem_stats", 00:04:55.395 "nbd_get_disks", 00:04:55.395 "nbd_stop_disk", 00:04:55.395 "nbd_start_disk", 00:04:55.395 "ublk_recover_disk", 00:04:55.396 "ublk_get_disks", 00:04:55.396 "ublk_stop_disk", 00:04:55.396 "ublk_start_disk", 00:04:55.396 "ublk_destroy_target", 00:04:55.396 "ublk_create_target", 00:04:55.396 "virtio_blk_create_transport", 00:04:55.396 "virtio_blk_get_transports", 00:04:55.396 "vhost_controller_set_coalescing", 00:04:55.396 "vhost_get_controllers", 00:04:55.396 "vhost_delete_controller", 00:04:55.396 "vhost_create_blk_controller", 00:04:55.396 "vhost_scsi_controller_remove_target", 00:04:55.396 "vhost_scsi_controller_add_target", 00:04:55.396 "vhost_start_scsi_controller", 00:04:55.396 "vhost_create_scsi_controller", 00:04:55.396 "thread_set_cpumask", 00:04:55.396 "scheduler_set_options", 00:04:55.396 "framework_get_governor", 00:04:55.396 "framework_get_scheduler", 00:04:55.396 "framework_set_scheduler", 00:04:55.396 "framework_get_reactors", 00:04:55.396 "thread_get_io_channels", 00:04:55.396 "thread_get_pollers", 00:04:55.396 "thread_get_stats", 00:04:55.396 "framework_monitor_context_switch", 00:04:55.396 "spdk_kill_instance", 00:04:55.396 "log_enable_timestamps", 00:04:55.396 "log_get_flags", 00:04:55.396 "log_clear_flag", 00:04:55.396 "log_set_flag", 00:04:55.396 "log_get_level", 00:04:55.396 "log_set_level", 00:04:55.396 "log_get_print_level", 00:04:55.396 "log_set_print_level", 00:04:55.396 "framework_enable_cpumask_locks", 00:04:55.396 "framework_disable_cpumask_locks", 00:04:55.396 "framework_wait_init", 00:04:55.396 "framework_start_init", 00:04:55.396 "scsi_get_devices", 00:04:55.396 "bdev_get_histogram", 00:04:55.396 "bdev_enable_histogram", 00:04:55.396 "bdev_set_qos_limit", 00:04:55.396 "bdev_set_qd_sampling_period", 00:04:55.396 "bdev_get_bdevs", 00:04:55.396 "bdev_reset_iostat", 00:04:55.396 "bdev_get_iostat", 00:04:55.396 "bdev_examine", 00:04:55.396 "bdev_wait_for_examine", 00:04:55.396 "bdev_set_options", 00:04:55.396 "accel_get_stats", 00:04:55.396 "accel_set_options", 00:04:55.396 "accel_set_driver", 00:04:55.396 "accel_crypto_key_destroy", 00:04:55.396 "accel_crypto_keys_get", 00:04:55.396 "accel_crypto_key_create", 00:04:55.396 "accel_assign_opc", 00:04:55.396 "accel_get_module_info", 00:04:55.396 "accel_get_opc_assignments", 00:04:55.396 "vmd_rescan", 00:04:55.396 "vmd_remove_device", 00:04:55.396 "vmd_enable", 00:04:55.396 "sock_get_default_impl", 00:04:55.396 "sock_set_default_impl", 00:04:55.396 "sock_impl_set_options", 00:04:55.396 "sock_impl_get_options", 00:04:55.396 "iobuf_get_stats", 00:04:55.396 "iobuf_set_options", 00:04:55.396 "keyring_get_keys", 00:04:55.396 "framework_get_pci_devices", 00:04:55.396 
"framework_get_config", 00:04:55.396 "framework_get_subsystems", 00:04:55.396 "fsdev_set_opts", 00:04:55.396 "fsdev_get_opts", 00:04:55.396 "trace_get_info", 00:04:55.396 "trace_get_tpoint_group_mask", 00:04:55.396 "trace_disable_tpoint_group", 00:04:55.396 "trace_enable_tpoint_group", 00:04:55.396 "trace_clear_tpoint_mask", 00:04:55.396 "trace_set_tpoint_mask", 00:04:55.396 "notify_get_notifications", 00:04:55.396 "notify_get_types", 00:04:55.396 "spdk_get_version", 00:04:55.396 "rpc_get_methods" 00:04:55.396 ] 00:04:55.396 09:33:23 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:55.396 09:33:23 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:55.396 09:33:23 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:55.653 09:33:23 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:55.653 09:33:23 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58050 00:04:55.653 09:33:23 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 58050 ']' 00:04:55.653 09:33:23 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 58050 00:04:55.653 09:33:23 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:04:55.653 09:33:23 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:55.654 09:33:23 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58050 00:04:55.654 09:33:23 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:55.654 09:33:23 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:55.654 09:33:23 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58050' 00:04:55.654 killing process with pid 58050 00:04:55.654 09:33:23 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 58050 00:04:55.654 09:33:23 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 58050 00:04:57.058 ************************************ 00:04:57.058 END TEST spdkcli_tcp 00:04:57.058 ************************************ 00:04:57.058 00:04:57.058 real 0m2.559s 00:04:57.058 user 0m4.503s 00:04:57.058 sys 0m0.508s 00:04:57.058 09:33:24 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:57.058 09:33:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:57.058 09:33:24 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:57.058 09:33:24 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:57.058 09:33:24 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:57.058 09:33:24 -- common/autotest_common.sh@10 -- # set +x 00:04:57.058 ************************************ 00:04:57.058 START TEST dpdk_mem_utility 00:04:57.058 ************************************ 00:04:57.058 09:33:24 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:57.058 * Looking for test storage... 
00:04:57.058 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:57.058 09:33:24 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:57.058 09:33:24 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:57.058 09:33:24 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:04:57.058 09:33:24 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:57.058 09:33:24 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:57.058 09:33:24 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:57.058 09:33:24 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:57.058 09:33:24 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:57.058 09:33:24 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:57.058 09:33:24 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:57.058 09:33:24 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:57.058 09:33:24 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:57.058 09:33:24 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:57.058 09:33:24 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:57.058 09:33:24 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:57.058 09:33:24 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:57.058 09:33:24 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:57.058 09:33:24 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:57.058 09:33:24 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:57.058 09:33:24 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:57.058 09:33:24 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:57.058 09:33:24 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:57.058 09:33:24 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:57.058 09:33:24 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:57.058 09:33:24 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:57.058 09:33:24 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:57.058 09:33:24 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:57.058 09:33:24 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:57.058 09:33:24 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:57.058 09:33:24 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:57.058 09:33:24 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:57.058 09:33:24 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:57.058 09:33:24 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:57.058 09:33:24 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:57.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.058 --rc genhtml_branch_coverage=1 00:04:57.058 --rc genhtml_function_coverage=1 00:04:57.058 --rc genhtml_legend=1 00:04:57.058 --rc geninfo_all_blocks=1 00:04:57.058 --rc geninfo_unexecuted_blocks=1 00:04:57.058 00:04:57.058 ' 00:04:57.058 09:33:24 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:57.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.058 --rc 
genhtml_branch_coverage=1 00:04:57.058 --rc genhtml_function_coverage=1 00:04:57.058 --rc genhtml_legend=1 00:04:57.058 --rc geninfo_all_blocks=1 00:04:57.058 --rc geninfo_unexecuted_blocks=1 00:04:57.058 00:04:57.058 ' 00:04:57.058 09:33:24 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:57.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.058 --rc genhtml_branch_coverage=1 00:04:57.058 --rc genhtml_function_coverage=1 00:04:57.058 --rc genhtml_legend=1 00:04:57.058 --rc geninfo_all_blocks=1 00:04:57.058 --rc geninfo_unexecuted_blocks=1 00:04:57.058 00:04:57.058 ' 00:04:57.058 09:33:24 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:57.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.058 --rc genhtml_branch_coverage=1 00:04:57.058 --rc genhtml_function_coverage=1 00:04:57.058 --rc genhtml_legend=1 00:04:57.058 --rc geninfo_all_blocks=1 00:04:57.058 --rc geninfo_unexecuted_blocks=1 00:04:57.058 00:04:57.058 ' 00:04:57.058 09:33:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:57.058 09:33:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58155 00:04:57.058 09:33:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58155 00:04:57.058 09:33:24 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 58155 ']' 00:04:57.058 09:33:24 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.058 09:33:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:57.058 09:33:24 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:57.058 09:33:24 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.058 09:33:24 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:57.058 09:33:24 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:57.058 [2024-11-07 09:33:24.592323] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:04:57.058 [2024-11-07 09:33:24.592445] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58155 ] 00:04:57.319 [2024-11-07 09:33:24.746399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.319 [2024-11-07 09:33:24.844804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.892 09:33:25 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:57.892 09:33:25 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:04:57.892 09:33:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:57.892 09:33:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:57.892 09:33:25 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:57.892 09:33:25 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:57.892 { 00:04:57.892 "filename": "/tmp/spdk_mem_dump.txt" 00:04:57.892 } 00:04:57.892 09:33:25 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:57.892 09:33:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:57.892 DPDK memory size 816.000000 MiB in 1 heap(s) 00:04:57.892 1 heaps totaling size 816.000000 MiB 00:04:57.892 size: 816.000000 MiB heap id: 0 00:04:57.892 end heaps---------- 00:04:57.892 9 mempools totaling size 595.772034 MiB 00:04:57.892 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:57.892 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:57.892 size: 92.545471 MiB name: bdev_io_58155 00:04:57.892 size: 50.003479 MiB name: msgpool_58155 00:04:57.892 size: 36.509338 MiB name: fsdev_io_58155 00:04:57.892 size: 21.763794 MiB name: PDU_Pool 00:04:57.892 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:57.892 size: 4.133484 MiB name: evtpool_58155 00:04:57.892 size: 0.026123 MiB name: Session_Pool 00:04:57.892 end mempools------- 00:04:57.892 6 memzones totaling size 4.142822 MiB 00:04:57.892 size: 1.000366 MiB name: RG_ring_0_58155 00:04:57.892 size: 1.000366 MiB name: RG_ring_1_58155 00:04:57.892 size: 1.000366 MiB name: RG_ring_4_58155 00:04:57.892 size: 1.000366 MiB name: RG_ring_5_58155 00:04:57.892 size: 0.125366 MiB name: RG_ring_2_58155 00:04:57.892 size: 0.015991 MiB name: RG_ring_3_58155 00:04:57.892 end memzones------- 00:04:57.892 09:33:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:57.892 heap id: 0 total size: 816.000000 MiB number of busy elements: 309 number of free elements: 18 00:04:57.892 list of free elements. 
size: 16.792847 MiB
00:04:57.892 element at address: 0x200006400000 with size: 1.995972 MiB
00:04:57.892 element at address: 0x20000a600000 with size: 1.995972 MiB
00:04:57.892 element at address: 0x200003e00000 with size: 1.991028 MiB
00:04:57.892 element at address: 0x200018d00040 with size: 0.999939 MiB
00:04:57.892 element at address: 0x200019100040 with size: 0.999939 MiB
00:04:57.892 element at address: 0x200019200000 with size: 0.999084 MiB
00:04:57.892 element at address: 0x200031e00000 with size: 0.994324 MiB
00:04:57.892 element at address: 0x200000400000 with size: 0.992004 MiB
00:04:57.892 element at address: 0x200018a00000 with size: 0.959656 MiB
00:04:57.892 element at address: 0x200019500040 with size: 0.936401 MiB
00:04:57.892 element at address: 0x200000200000 with size: 0.716980 MiB
00:04:57.892 element at address: 0x20001ac00000 with size: 0.560974 MiB
00:04:57.892 element at address: 0x200000c00000 with size: 0.491638 MiB
00:04:57.892 element at address: 0x200018e00000 with size: 0.488220 MiB
00:04:57.892 element at address: 0x200019600000 with size: 0.485413 MiB
00:04:57.892 element at address: 0x200012c00000 with size: 0.443481 MiB
00:04:57.892 element at address: 0x200028000000 with size: 0.390930 MiB
00:04:57.892 element at address: 0x200000800000 with size: 0.350891 MiB
00:04:57.892 list of standard malloc elements. size: 199.286255 MiB
00:04:57.892 element at address: 0x20000a7fef80 with size: 132.000183 MiB
00:04:57.892 element at address: 0x2000065fef80 with size: 64.000183 MiB
00:04:57.892 element at address: 0x200018bfff80 with size: 1.000183 MiB
00:04:57.892 element at address: 0x200018ffff80 with size: 1.000183 MiB
00:04:57.892 element at address: 0x2000193fff80 with size: 1.000183 MiB
00:04:57.892 element at address: 0x2000003d9e80 with size: 0.140808 MiB
00:04:57.892 element at address: 0x2000195eff40 with size: 0.062683 MiB
00:04:57.892 element at address: 0x2000003fdf40 with size: 0.007996 MiB
00:04:57.892 element at address: 0x20000a5ff040 with size: 0.000427 MiB
00:04:57.892 element at address: 0x2000195efdc0 with size: 0.000366 MiB
00:04:57.892 element at address: 0x200012bff040 with size: 0.000305 MiB
00:04:57.892 element at address: 0x2000002d7b00 with size: 0.000244 MiB
00:04:57.892 element at address: 0x2000003d9d80 with size: 0.000244 MiB
00:04:57.892 elements at addresses 0x2000004fdf40 through 0x2000004ffdc0 with size: 0.000244 MiB each
00:04:57.893 elements at addresses 0x20000087e1c0 through 0x20000087f4c0 with size: 0.000244 MiB each
00:04:57.893 elements at addresses 0x2000008ff800 and 0x2000008ffa80 with size: 0.000244 MiB each
00:04:57.893 elements at addresses 0x200000c7ddc0 through 0x200000c7ebc0 with size: 0.000244 MiB each
00:04:57.893 elements at addresses 0x200000cfef00 and 0x200000cff000 with size: 0.000244 MiB each
00:04:57.893 elements at addresses 0x20000a5ff200 through 0x20000a5fff00 with size: 0.000244 MiB each
00:04:57.893 elements at addresses 0x200012bff180 through 0x200012bfff00 with size: 0.000244 MiB each
00:04:57.893 elements at addresses 0x200012c71880 through 0x200012c72180 with size: 0.000244 MiB each
00:04:57.893 element at address: 0x200012cf24c0 with size: 0.000244 MiB
00:04:57.893 element at address: 0x200018afdd00 with size: 0.000244 MiB
00:04:57.893 elements at addresses 0x200018e7cfc0 through 0x200018e7d9c0 with size: 0.000244 MiB each
00:04:57.893 element at address: 0x200018efdd00 with size: 0.000244 MiB
00:04:57.893 element at address: 0x2000192ffc40 with size: 0.000244 MiB
00:04:57.893 elements at addresses 0x2000195efbc0 and 0x2000195efcc0 with size: 0.000244 MiB each
00:04:57.893 element at address: 0x2000196bc680 with size: 0.000244 MiB
00:04:57.893 elements at addresses 0x20001ac8f9c0 through 0x20001ac953c0 with size: 0.000244 MiB each
00:04:57.894 elements at addresses 0x200028064140 and 0x200028064240 with size: 0.000244 MiB each
00:04:57.894 elements at addresses 0x20002806af00 through 0x20002806fe80 with size: 0.000244 MiB each
00:04:57.894 list of memzone associated elements. size: 599.920898 MiB
00:04:57.894 element at address: 0x20001ac954c0 with size: 211.416809 MiB
00:04:57.894 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:04:57.894 element at address: 0x20002806ff80 with size: 157.562622 MiB
00:04:57.894 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:04:57.894 element at address: 0x200012df4740 with size: 92.045105 MiB
00:04:57.894 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58155_0
00:04:57.894 element at address: 0x200000dff340 with size: 48.003113 MiB
00:04:57.894 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58155_0
00:04:57.894 element at address: 0x200003ffdb40 with size: 36.008972 MiB
00:04:57.894 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58155_0
00:04:57.894 element at address: 0x2000197be900 with size: 20.255615 MiB
00:04:57.894 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:04:57.894 element at address: 0x200031ffeb00 with size: 18.005127 MiB
00:04:57.894 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:04:57.894 element at address: 0x2000004ffec0 with size: 3.000305 MiB
00:04:57.894 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58155_0
00:04:57.894 element at address: 0x2000009ffdc0 with size: 2.000549 MiB
00:04:57.894 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58155
00:04:57.894 element at address: 0x2000002d7c00 with size: 1.008179 MiB
00:04:57.894 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58155
00:04:57.894 element at address: 0x200018efde00 with size: 1.008179 MiB
00:04:57.894 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:04:57.894 element at address: 0x2000196bc780 with size: 1.008179 MiB
00:04:57.894 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:04:57.894 element at address: 0x200018afde00 with size: 1.008179 MiB
00:04:57.894 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:04:57.894 element at address: 0x200012cf25c0 with size: 1.008179 MiB
00:04:57.894 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:04:57.894 element at address: 0x200000cff100 with size: 1.000549 MiB
00:04:57.894 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58155
00:04:57.894 element at address: 0x2000008ffb80 with size: 1.000549 MiB
00:04:57.894 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58155
00:04:57.894 element at address: 0x2000192ffd40 with size: 1.000549 MiB
00:04:57.894 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58155
00:04:57.894 element at address: 0x200031efe8c0 with size: 1.000549 MiB
00:04:57.895 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58155
00:04:57.895 element at address: 0x20000087f5c0 with size: 0.500549 MiB
00:04:57.895 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58155
00:04:57.895 element at address: 0x200000c7ecc0 with size: 0.500549 MiB
00:04:57.895 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58155
00:04:57.895 element at address: 0x200018e7dac0 with size: 0.500549 MiB
00:04:57.895 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:04:57.895 element at address: 0x200012c72280 with size: 0.500549 MiB
00:04:57.895 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:04:57.895 element at address: 0x20001967c440 with size: 0.250549 MiB
00:04:57.895 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:04:57.895 element at address: 0x2000002b78c0 with size: 0.125549 MiB
00:04:57.895 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58155
00:04:57.895 element at address: 0x20000085df80 with size: 0.125549 MiB
00:04:57.895 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58155
00:04:57.895 element at address: 0x200018af5ac0 with size: 0.031799 MiB
00:04:57.895 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:04:57.895 element at address: 0x200028064340 with size: 0.023804 MiB
00:04:57.895 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:04:57.895 element at address: 0x200000859d40 with size: 0.016174 MiB
00:04:57.895 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58155
00:04:57.895 element at address: 0x20002806a4c0 with size: 0.002502 MiB
00:04:57.895 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:04:57.895 element at address: 0x2000004ffa40 with size: 0.000366 MiB
00:04:57.895 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58155
00:04:57.895 element at address: 0x2000008ff900 with size: 0.000366 MiB
00:04:57.895 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58155
00:04:57.895 element at address: 0x200012bffd80 with size: 0.000366 MiB
00:04:57.895 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58155
00:04:57.895 element at address: 0x20002806b000 with size: 0.000366 MiB
00:04:57.895 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:04:57.895 09:33:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:04:57.895 09:33:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58155
00:04:57.895 09:33:25 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 58155 ']'
00:04:57.895 09:33:25 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 58155
00:04:57.895 09:33:25 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname
00:04:57.895 09:33:25 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:04:57.895 09:33:25 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58155
00:04:58.155 killing process with pid 58155
09:33:25 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0
09:33:25 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
09:33:25 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58155'
09:33:25 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 58155
09:33:25 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 58155
00:04:59.540 ************************************
00:04:59.540 END TEST dpdk_mem_utility
00:04:59.540 ************************************
00:04:59.540
00:04:59.540 real 0m2.601s
00:04:59.540 user 0m2.559s
00:04:59.540 sys 0m0.458s
00:04:59.540 09:33:26 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable
00:04:59.540 09:33:26 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:04:59.540 09:33:27 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:04:59.540 09:33:27 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:04:59.540 09:33:27 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:04:59.540 09:33:27 -- common/autotest_common.sh@10 -- # set +x
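The "element at address ... with size ..." listing above is the DPDK heap dump that test_dpdk_mem_info.sh pulls from the running SPDK application (pid 58155) over JSON-RPC before tearing it down; the memzone section is what ties each region back to a named pool such as MP_msgpool_58155. A minimal sketch of that flow, assuming a built SPDK tree and jq on the host; env_dpdk_get_mem_stats is the RPC this script drives, but the 'filename' response field and the final grep are illustrative assumptions, not taken from this log:

    # start an SPDK app, e.g. spdk_tgt, which listens on /var/tmp/spdk.sock by default
    ./build/bin/spdk_tgt -m 0x1 &

    # ask the running app to write its DPDK malloc/memzone statistics to a file;
    # the RPC response names the dump file it produced (field name assumed here)
    dump=$(./scripts/rpc.py env_dpdk_get_mem_stats | jq -r '.filename')

    # the per-element and memzone listings recorded above come from such a dump
    grep -c 'element at address' "$dump"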
00:04:59.540 ************************************ 00:04:59.540 START TEST event 00:04:59.540 ************************************ 00:04:59.540 09:33:27 event -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:59.540 * Looking for test storage... 00:04:59.540 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:59.540 09:33:27 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:59.540 09:33:27 event -- common/autotest_common.sh@1691 -- # lcov --version 00:04:59.540 09:33:27 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:59.540 09:33:27 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:59.540 09:33:27 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:59.540 09:33:27 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:59.540 09:33:27 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:59.540 09:33:27 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.540 09:33:27 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:59.540 09:33:27 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:59.540 09:33:27 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:59.540 09:33:27 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:59.540 09:33:27 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:59.540 09:33:27 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:59.540 09:33:27 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:59.540 09:33:27 event -- scripts/common.sh@344 -- # case "$op" in 00:04:59.540 09:33:27 event -- scripts/common.sh@345 -- # : 1 00:04:59.540 09:33:27 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:59.540 09:33:27 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:59.540 09:33:27 event -- scripts/common.sh@365 -- # decimal 1 00:04:59.540 09:33:27 event -- scripts/common.sh@353 -- # local d=1 00:04:59.540 09:33:27 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.540 09:33:27 event -- scripts/common.sh@355 -- # echo 1 00:04:59.540 09:33:27 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:59.540 09:33:27 event -- scripts/common.sh@366 -- # decimal 2 00:04:59.540 09:33:27 event -- scripts/common.sh@353 -- # local d=2 00:04:59.540 09:33:27 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:59.540 09:33:27 event -- scripts/common.sh@355 -- # echo 2 00:04:59.540 09:33:27 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:59.540 09:33:27 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:59.540 09:33:27 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:59.540 09:33:27 event -- scripts/common.sh@368 -- # return 0 00:04:59.540 09:33:27 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:59.541 09:33:27 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:59.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.541 --rc genhtml_branch_coverage=1 00:04:59.541 --rc genhtml_function_coverage=1 00:04:59.541 --rc genhtml_legend=1 00:04:59.541 --rc geninfo_all_blocks=1 00:04:59.541 --rc geninfo_unexecuted_blocks=1 00:04:59.541 00:04:59.541 ' 00:04:59.541 09:33:27 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:59.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.541 --rc genhtml_branch_coverage=1 00:04:59.541 --rc genhtml_function_coverage=1 00:04:59.541 --rc genhtml_legend=1 00:04:59.541 --rc 
geninfo_all_blocks=1 00:04:59.541 --rc geninfo_unexecuted_blocks=1 00:04:59.541 00:04:59.541 ' 00:04:59.541 09:33:27 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:59.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.541 --rc genhtml_branch_coverage=1 00:04:59.541 --rc genhtml_function_coverage=1 00:04:59.541 --rc genhtml_legend=1 00:04:59.541 --rc geninfo_all_blocks=1 00:04:59.541 --rc geninfo_unexecuted_blocks=1 00:04:59.541 00:04:59.541 ' 00:04:59.541 09:33:27 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:59.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.541 --rc genhtml_branch_coverage=1 00:04:59.541 --rc genhtml_function_coverage=1 00:04:59.541 --rc genhtml_legend=1 00:04:59.541 --rc geninfo_all_blocks=1 00:04:59.541 --rc geninfo_unexecuted_blocks=1 00:04:59.541 00:04:59.541 ' 00:04:59.541 09:33:27 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:59.541 09:33:27 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:59.541 09:33:27 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:59.541 09:33:27 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:04:59.541 09:33:27 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:59.541 09:33:27 event -- common/autotest_common.sh@10 -- # set +x 00:04:59.541 ************************************ 00:04:59.541 START TEST event_perf 00:04:59.541 ************************************ 00:04:59.541 09:33:27 event.event_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:59.802 Running I/O for 1 seconds...[2024-11-07 09:33:27.222079] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:04:59.802 [2024-11-07 09:33:27.222192] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58247 ] 00:04:59.802 [2024-11-07 09:33:27.386994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:00.064 [2024-11-07 09:33:27.514811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:00.064 [2024-11-07 09:33:27.515146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:00.064 [2024-11-07 09:33:27.515355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.064 [2024-11-07 09:33:27.515357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:01.036 Running I/O for 1 seconds... 00:05:01.036 lcore 0: 161440 00:05:01.036 lcore 1: 161440 00:05:01.036 lcore 2: 161441 00:05:01.036 lcore 3: 161440 00:05:01.036 done. 
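For context, the four "lcore N" counters just above are event_perf's per-reactor event totals for its one-second run. The invocation below mirrors the one recorded in this log; the expected-output comment is only an illustration based on the numbers seen here:

    cd /home/vagrant/spdk_repo/spdk
    # -m 0xF starts one reactor on each of four cores; -t 1 submits and
    # processes events for one second before printing per-lcore totals
    ./test/event/event_perf/event_perf -m 0xF -t 1
    # expected shape of the output (values vary by host):
    #   lcore 0: 161440
    #   ...
    #   done.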
00:05:01.036 00:05:01.036 real 0m1.506s 00:05:01.036 user 0m4.284s 00:05:01.036 sys 0m0.099s 00:05:01.036 09:33:28 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:01.036 ************************************ 00:05:01.036 09:33:28 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:01.036 END TEST event_perf 00:05:01.036 ************************************ 00:05:01.294 09:33:28 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:01.294 09:33:28 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:05:01.294 09:33:28 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:01.294 09:33:28 event -- common/autotest_common.sh@10 -- # set +x 00:05:01.294 ************************************ 00:05:01.294 START TEST event_reactor 00:05:01.294 ************************************ 00:05:01.294 09:33:28 event.event_reactor -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:01.294 [2024-11-07 09:33:28.791084] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:05:01.294 [2024-11-07 09:33:28.791315] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58287 ] 00:05:01.294 [2024-11-07 09:33:28.947014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.552 [2024-11-07 09:33:29.047455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.932 test_start 00:05:02.932 oneshot 00:05:02.932 tick 100 00:05:02.932 tick 100 00:05:02.932 tick 250 00:05:02.932 tick 100 00:05:02.932 tick 100 00:05:02.932 tick 100 00:05:02.932 tick 250 00:05:02.932 tick 500 00:05:02.932 tick 100 00:05:02.932 tick 100 00:05:02.932 tick 250 00:05:02.932 tick 100 00:05:02.932 tick 100 00:05:02.932 test_end 00:05:02.932 ************************************ 00:05:02.932 END TEST event_reactor 00:05:02.932 ************************************ 00:05:02.932 00:05:02.932 real 0m1.441s 00:05:02.932 user 0m1.268s 00:05:02.932 sys 0m0.065s 00:05:02.932 09:33:30 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:02.932 09:33:30 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:02.932 09:33:30 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:02.932 09:33:30 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:05:02.932 09:33:30 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:02.932 09:33:30 event -- common/autotest_common.sh@10 -- # set +x 00:05:02.932 ************************************ 00:05:02.932 START TEST event_reactor_perf 00:05:02.932 ************************************ 00:05:02.932 09:33:30 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:02.932 [2024-11-07 09:33:30.293962] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:05:02.932 [2024-11-07 09:33:30.294088] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58324 ] 00:05:02.932 [2024-11-07 09:33:30.455215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.932 [2024-11-07 09:33:30.556213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.318 test_start 00:05:04.318 test_end 00:05:04.318 Performance: 317035 events per second 00:05:04.318 ************************************ 00:05:04.318 END TEST event_reactor_perf 00:05:04.318 ************************************ 00:05:04.318 00:05:04.318 real 0m1.445s 00:05:04.318 user 0m1.271s 00:05:04.318 sys 0m0.065s 00:05:04.318 09:33:31 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:04.318 09:33:31 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:04.318 09:33:31 event -- event/event.sh@49 -- # uname -s 00:05:04.318 09:33:31 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:04.318 09:33:31 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:04.318 09:33:31 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:04.318 09:33:31 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:04.318 09:33:31 event -- common/autotest_common.sh@10 -- # set +x 00:05:04.318 ************************************ 00:05:04.318 START TEST event_scheduler 00:05:04.318 ************************************ 00:05:04.318 09:33:31 event.event_scheduler -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:04.318 * Looking for test storage... 
00:05:04.318 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:04.318 09:33:31 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:04.318 09:33:31 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:04.318 09:33:31 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:05:04.318 09:33:31 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:04.318 09:33:31 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:04.318 09:33:31 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:04.318 09:33:31 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:04.318 09:33:31 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:04.318 09:33:31 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:04.318 09:33:31 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:04.318 09:33:31 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:04.318 09:33:31 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:04.318 09:33:31 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:04.318 09:33:31 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:04.318 09:33:31 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:04.318 09:33:31 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:04.318 09:33:31 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:04.318 09:33:31 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:04.318 09:33:31 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:04.318 09:33:31 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:04.318 09:33:31 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:04.318 09:33:31 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:04.318 09:33:31 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:04.318 09:33:31 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:04.318 09:33:31 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:04.318 09:33:31 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:04.318 09:33:31 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:04.318 09:33:31 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:04.318 09:33:31 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:04.318 09:33:31 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:04.318 09:33:31 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:04.318 09:33:31 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:04.318 09:33:31 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:04.318 09:33:31 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:04.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.318 --rc genhtml_branch_coverage=1 00:05:04.318 --rc genhtml_function_coverage=1 00:05:04.318 --rc genhtml_legend=1 00:05:04.318 --rc geninfo_all_blocks=1 00:05:04.318 --rc geninfo_unexecuted_blocks=1 00:05:04.318 00:05:04.318 ' 00:05:04.318 09:33:31 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:04.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.318 --rc genhtml_branch_coverage=1 00:05:04.318 --rc genhtml_function_coverage=1 00:05:04.318 --rc genhtml_legend=1 00:05:04.318 --rc geninfo_all_blocks=1 00:05:04.318 --rc geninfo_unexecuted_blocks=1 00:05:04.318 00:05:04.318 ' 00:05:04.318 09:33:31 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:04.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.318 --rc genhtml_branch_coverage=1 00:05:04.318 --rc genhtml_function_coverage=1 00:05:04.318 --rc genhtml_legend=1 00:05:04.318 --rc geninfo_all_blocks=1 00:05:04.318 --rc geninfo_unexecuted_blocks=1 00:05:04.318 00:05:04.318 ' 00:05:04.318 09:33:31 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:04.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.318 --rc genhtml_branch_coverage=1 00:05:04.318 --rc genhtml_function_coverage=1 00:05:04.318 --rc genhtml_legend=1 00:05:04.318 --rc geninfo_all_blocks=1 00:05:04.318 --rc geninfo_unexecuted_blocks=1 00:05:04.318 00:05:04.318 ' 00:05:04.318 09:33:31 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:04.318 09:33:31 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58400 00:05:04.318 09:33:31 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:04.318 09:33:31 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58400 00:05:04.318 09:33:31 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 58400 ']' 00:05:04.318 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:05:04.318 09:33:31 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:04.318 09:33:31 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:04.318 09:33:31 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:04.318 09:33:31 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:04.318 09:33:31 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:04.318 09:33:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:04.318 [2024-11-07 09:33:31.980621] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:05:04.318 [2024-11-07 09:33:31.980763] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58400 ] 00:05:04.578 [2024-11-07 09:33:32.142723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:04.578 [2024-11-07 09:33:32.245275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.578 [2024-11-07 09:33:32.245684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:04.578 [2024-11-07 09:33:32.246160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:04.839 [2024-11-07 09:33:32.246240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:05.411 09:33:32 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:05.411 09:33:32 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:05:05.411 09:33:32 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:05.411 09:33:32 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.411 09:33:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:05.411 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:05.411 POWER: Cannot set governor of lcore 0 to userspace 00:05:05.411 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:05.411 POWER: Cannot set governor of lcore 0 to performance 00:05:05.411 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:05.411 POWER: Cannot set governor of lcore 0 to userspace 00:05:05.411 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:05.411 POWER: Cannot set governor of lcore 0 to userspace 00:05:05.411 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:05.411 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:05.411 POWER: Unable to set Power Management Environment for lcore 0 00:05:05.411 [2024-11-07 09:33:32.784740] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:05.411 [2024-11-07 09:33:32.784778] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:05.411 [2024-11-07 09:33:32.784804] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:05.411 [2024-11-07 
09:33:32.784876] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:05.411 [2024-11-07 09:33:32.784902] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:05.411 [2024-11-07 09:33:32.784925] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:05.411 09:33:32 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.411 09:33:32 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:05.411 09:33:32 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.411 09:33:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:05.411 [2024-11-07 09:33:33.009512] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:05.411 09:33:33 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.411 09:33:33 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:05.411 09:33:33 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:05.411 09:33:33 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:05.411 09:33:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:05.411 ************************************ 00:05:05.411 START TEST scheduler_create_thread 00:05:05.411 ************************************ 00:05:05.411 09:33:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:05:05.411 09:33:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:05.411 09:33:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.411 09:33:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.411 2 00:05:05.411 09:33:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.411 09:33:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:05.411 09:33:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.411 09:33:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.411 3 00:05:05.411 09:33:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.411 09:33:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:05.411 09:33:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.411 09:33:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.411 4 00:05:05.411 09:33:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.411 09:33:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:05.411 09:33:33 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.411 09:33:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.411 5 00:05:05.411 09:33:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.411 09:33:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:05.411 09:33:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.411 09:33:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.411 6 00:05:05.411 09:33:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.411 09:33:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:05.411 09:33:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.411 09:33:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.411 7 00:05:05.411 09:33:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.411 09:33:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:05.411 09:33:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.411 09:33:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.671 8 00:05:05.671 09:33:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.671 09:33:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:05.672 09:33:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.672 09:33:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.672 9 00:05:05.672 09:33:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.672 09:33:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:05.672 09:33:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.672 09:33:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.672 10 00:05:05.672 09:33:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.672 09:33:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:05.672 09:33:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.672 09:33:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.672 09:33:33 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.672 09:33:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:05.672 09:33:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:05.672 09:33:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.672 09:33:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.672 09:33:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.672 09:33:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:05.672 09:33:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.672 09:33:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.614 09:33:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.614 09:33:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:06.614 09:33:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:06.614 09:33:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.614 09:33:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.555 ************************************ 00:05:07.555 END TEST scheduler_create_thread 00:05:07.555 ************************************ 00:05:07.556 09:33:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.556 00:05:07.556 real 0m2.136s 00:05:07.556 user 0m0.015s 00:05:07.556 sys 0m0.006s 00:05:07.556 09:33:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:07.556 09:33:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.556 09:33:35 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:07.556 09:33:35 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58400 00:05:07.556 09:33:35 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 58400 ']' 00:05:07.556 09:33:35 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 58400 00:05:07.556 09:33:35 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:05:07.556 09:33:35 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:07.816 09:33:35 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58400 00:05:07.816 killing process with pid 58400 00:05:07.816 09:33:35 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:05:07.816 09:33:35 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:05:07.816 09:33:35 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58400' 00:05:07.816 09:33:35 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 58400 00:05:07.816 
09:33:35 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 58400 00:05:08.078 [2024-11-07 09:33:35.641734] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:09.039 00:05:09.039 real 0m4.753s 00:05:09.039 user 0m7.966s 00:05:09.039 sys 0m0.367s 00:05:09.039 09:33:36 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:09.039 ************************************ 00:05:09.039 END TEST event_scheduler 00:05:09.039 ************************************ 00:05:09.039 09:33:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:09.039 09:33:36 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:09.039 09:33:36 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:09.039 09:33:36 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:09.039 09:33:36 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:09.039 09:33:36 event -- common/autotest_common.sh@10 -- # set +x 00:05:09.039 ************************************ 00:05:09.039 START TEST app_repeat 00:05:09.039 ************************************ 00:05:09.039 09:33:36 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:05:09.039 09:33:36 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.039 09:33:36 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.039 09:33:36 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:09.039 09:33:36 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:09.039 09:33:36 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:09.039 09:33:36 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:09.039 09:33:36 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:09.039 09:33:36 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58495 00:05:09.039 Process app_repeat pid: 58495 00:05:09.039 spdk_app_start Round 0 00:05:09.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:09.040 09:33:36 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:09.040 09:33:36 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:09.040 09:33:36 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58495' 00:05:09.040 09:33:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:09.040 09:33:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:09.040 09:33:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58495 /var/tmp/spdk-nbd.sock 00:05:09.040 09:33:36 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58495 ']' 00:05:09.040 09:33:36 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:09.040 09:33:36 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:09.040 09:33:36 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
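The event_scheduler suite that just finished drives the running app purely over RPC: it selects the dynamic scheduler, completes framework init, then creates and deletes pinned threads through plugin RPCs (the POWER/governor errors earlier are the DPDK governor failing to claim cpufreq control in this VM, after which the test continues on the fallback path). A sketch of that RPC sequence, assuming the scheduler app was started with --wait-for-rpc and that the scheduler_plugin module is importable (the test script arranges PYTHONPATH for this); capturing the id from stdout mirrors how thread_id is assigned in the trace above:

    rpc="./scripts/rpc.py -s /var/tmp/spdk.sock"

    # pick the dynamic scheduler while the app is still waiting for RPCs,
    # then let framework initialization finish
    $rpc framework_set_scheduler dynamic
    $rpc framework_start_init

    # create a thread pinned to core 0 (mask 0x1) with 100 percent active time,
    # then delete it using the thread id the create RPC prints
    tid=$($rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100)
    $rpc --plugin scheduler_plugin scheduler_thread_delete "$tid"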
00:05:09.040 09:33:36 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:09.040 09:33:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:09.040 [2024-11-07 09:33:36.653064] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:05:09.040 [2024-11-07 09:33:36.653200] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58495 ] 00:05:09.300 [2024-11-07 09:33:36.818788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:09.300 [2024-11-07 09:33:36.950067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:09.300 [2024-11-07 09:33:36.950143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.244 09:33:37 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:10.244 09:33:37 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:10.244 09:33:37 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:10.244 Malloc0 00:05:10.244 09:33:37 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:10.505 Malloc1 00:05:10.505 09:33:38 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:10.505 09:33:38 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.505 09:33:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:10.505 09:33:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:10.505 09:33:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.505 09:33:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:10.507 09:33:38 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:10.507 09:33:38 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.507 09:33:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:10.507 09:33:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:10.507 09:33:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.507 09:33:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:10.507 09:33:38 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:10.507 09:33:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:10.507 09:33:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:10.507 09:33:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:10.769 /dev/nbd0 00:05:10.769 09:33:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:10.769 09:33:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:10.769 09:33:38 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:10.769 09:33:38 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:10.769 09:33:38 event.app_repeat 
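Each round then creates two 64 MB malloc bdevs over that socket and exports them as kernel nbd block devices; the rpc.py invocations above map one-to-one onto this sketch (RPC is just a local shorthand introduced here):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
$RPC bdev_malloc_create 64 4096        # event.sh@27: 64 MB, 4 KiB blocks -> Malloc0
$RPC bdev_malloc_create 64 4096        # event.sh@28: -> Malloc1
$RPC nbd_start_disk Malloc0 /dev/nbd0  # nbd_common.sh@15, first loop pass
$RPC nbd_start_disk Malloc1 /dev/nbd1  # nbd_common.sh@15, second loop pass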
-- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:10.769 09:33:38 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:10.769 09:33:38 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:05:10.769 09:33:38 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:10.769 09:33:38 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:10.769 09:33:38 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:10.769 09:33:38 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:10.769 1+0 records in 00:05:10.769 1+0 records out 00:05:10.769 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000322672 s, 12.7 MB/s 00:05:10.769 09:33:38 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:10.769 09:33:38 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:10.769 09:33:38 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:10.769 09:33:38 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:10.769 09:33:38 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:10.769 09:33:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:10.769 09:33:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:10.769 09:33:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:11.030 /dev/nbd1 00:05:11.030 09:33:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:11.030 09:33:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:11.030 09:33:38 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:11.030 09:33:38 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:11.030 09:33:38 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:11.030 09:33:38 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:11.030 09:33:38 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:11.030 09:33:38 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:11.030 09:33:38 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:11.030 09:33:38 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:11.030 09:33:38 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:11.030 1+0 records in 00:05:11.030 1+0 records out 00:05:11.030 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000448719 s, 9.1 MB/s 00:05:11.030 09:33:38 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:11.030 09:33:38 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:11.030 09:33:38 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:11.030 09:33:38 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:11.030 09:33:38 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:11.030 09:33:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:11.030 
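waitfornbd, whose trace repeats above for nbd0 and then nbd1, gates the test on the kernel actually publishing the device: poll /proc/partitions up to 20 times, then prove the device is readable by pulling one 4 KiB block off it with O_DIRECT and checking the copied size. Reassembled from the traced checks; the sleep between polls is an assumption, since xtrace only shows the tests:

waitfornbd() {
    local nbd_name=$1 i size
    local tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdtest
    for ((i = 1; i <= 20; i++)); do                       # @873: bounded poll
        grep -q -w "$nbd_name" /proc/partitions && break  # @874/@875
        sleep 0.1                                         # assumed back-off, not traced
    done
    ((i <= 20)) || return 1
    for ((i = 1; i <= 20; i++)); do                       # @886: bounded read attempts
        dd if=/dev/$nbd_name of="$tmp" bs=4096 count=1 iflag=direct || continue  # @887
        size=$(stat -c %s "$tmp")                         # @888
        rm -f "$tmp"                                      # @889: scratch file is temporary
        [ "$size" != 0 ] && return 0                      # @890/@891: a real block came back
    done
    return 1
}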
09:33:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:11.030 09:33:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:11.030 09:33:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.030 09:33:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:11.292 09:33:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:11.292 { 00:05:11.292 "nbd_device": "/dev/nbd0", 00:05:11.292 "bdev_name": "Malloc0" 00:05:11.292 }, 00:05:11.292 { 00:05:11.292 "nbd_device": "/dev/nbd1", 00:05:11.292 "bdev_name": "Malloc1" 00:05:11.292 } 00:05:11.292 ]' 00:05:11.292 09:33:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:11.292 { 00:05:11.292 "nbd_device": "/dev/nbd0", 00:05:11.292 "bdev_name": "Malloc0" 00:05:11.292 }, 00:05:11.292 { 00:05:11.292 "nbd_device": "/dev/nbd1", 00:05:11.292 "bdev_name": "Malloc1" 00:05:11.292 } 00:05:11.292 ]' 00:05:11.292 09:33:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:11.292 09:33:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:11.292 /dev/nbd1' 00:05:11.292 09:33:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:11.292 /dev/nbd1' 00:05:11.292 09:33:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:11.292 09:33:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:11.292 09:33:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:11.292 09:33:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:11.292 09:33:38 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:11.292 09:33:38 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:11.292 09:33:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.292 09:33:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:11.292 09:33:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:11.292 09:33:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:11.292 09:33:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:11.292 09:33:38 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:11.292 256+0 records in 00:05:11.292 256+0 records out 00:05:11.292 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00680563 s, 154 MB/s 00:05:11.292 09:33:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:11.292 09:33:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:11.292 256+0 records in 00:05:11.292 256+0 records out 00:05:11.292 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0156087 s, 67.2 MB/s 00:05:11.292 09:33:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:11.292 09:33:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:11.292 256+0 records in 00:05:11.292 256+0 records out 00:05:11.292 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0209319 s, 50.1 MB/s 00:05:11.292 09:33:38 event.app_repeat -- 
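Before writing, the harness double-checks that exactly two nbd devices are exported (nbd_get_disks JSON, jq for the device paths, grep -c to count), then the write half of nbd_dd_data_verify fills a 1 MiB scratch file from /dev/urandom and copies it onto each device with O_DIRECT. The traced commands, condensed into one runnable sequence:

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
count=$($RPC nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd)  # @63-@65
[ "$count" -ne 2 ] && exit 1                              # @96: bail on a mismatch

tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256       # @76: 1 MiB of random data
for dev in /dev/nbd0 /dev/nbd1; do                        # @77
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct   # @78: raw write
done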
bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:11.292 09:33:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.292 09:33:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:11.292 09:33:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:11.292 09:33:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:11.292 09:33:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:11.292 09:33:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:11.292 09:33:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:11.292 09:33:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:11.292 09:33:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:11.292 09:33:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:11.292 09:33:38 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:11.292 09:33:38 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:11.292 09:33:38 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.292 09:33:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.292 09:33:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:11.292 09:33:38 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:11.292 09:33:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:11.292 09:33:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:11.553 09:33:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:11.553 09:33:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:11.553 09:33:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:11.553 09:33:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:11.553 09:33:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:11.553 09:33:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:11.553 09:33:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:11.554 09:33:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:11.554 09:33:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:11.554 09:33:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:11.814 09:33:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:11.814 09:33:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:11.814 09:33:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:11.814 09:33:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:11.814 09:33:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:11.814 09:33:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:11.814 09:33:39 
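The verify half replays the same scratch file against each device with cmp and then tears everything down; waitfornbd_exit is the mirror of waitfornbd, polling /proc/partitions until the name disappears. Condensed from the trace, with the poll interval again assumed:

tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
for dev in /dev/nbd0 /dev/nbd1; do                  # @82
    cmp -b -n 1M "$tmp_file" "$dev"                 # @83: byte-for-byte comparison
done
rm "$tmp_file"                                      # @85

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
for dev in /dev/nbd0 /dev/nbd1; do                  # @53
    $RPC nbd_stop_disk "$dev"                       # @54
    name=$(basename "$dev")                         # @55: waitfornbd_exit takes nbd0/nbd1
    for ((i = 1; i <= 20; i++)); do                 # @37: wait for the kernel to drop it
        grep -q -w "$name" /proc/partitions || break   # @38/@41
        sleep 0.1                                   # assumed, not traced
    done
done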
event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:11.814 09:33:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:11.814 09:33:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:11.814 09:33:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.814 09:33:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:12.075 09:33:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:12.075 09:33:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:12.075 09:33:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:12.075 09:33:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:12.075 09:33:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:12.075 09:33:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:12.075 09:33:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:12.075 09:33:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:12.075 09:33:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:12.075 09:33:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:12.075 09:33:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:12.075 09:33:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:12.076 09:33:39 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:12.336 09:33:39 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:12.902 [2024-11-07 09:33:40.531121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:13.158 [2024-11-07 09:33:40.599731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.158 [2024-11-07 09:33:40.599732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.159 [2024-11-07 09:33:40.702501] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:13.159 [2024-11-07 09:33:40.702555] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:15.686 spdk_app_start Round 1 00:05:15.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:15.686 09:33:42 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:15.686 09:33:42 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:15.686 09:33:42 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58495 /var/tmp/spdk-nbd.sock 00:05:15.686 09:33:42 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58495 ']' 00:05:15.686 09:33:42 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:15.686 09:33:42 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:15.686 09:33:42 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
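The hand-off from Round 0 to Round 1 above is event.sh's outer loop: announce the round, wait for the app to listen again, run the malloc/nbd verification, then ask the app to deliver SIGTERM to itself and sleep so the next iteration hits a fresh spdk_app_start. Per the @23-@35 trace, roughly:

for i in {0..2}; do                                    # event.sh@23
    echo "spdk_app_start Round $i"                     # event.sh@24
    waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock # event.sh@25
    # ... bdev_malloc_create + nbd_rpc_data_verify, as sketched above ...
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
        spdk_kill_instance SIGTERM                     # event.sh@34
    sleep 3                                            # event.sh@35
done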
00:05:15.686 09:33:42 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:15.686 09:33:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:15.686 09:33:43 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:15.686 09:33:43 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:15.686 09:33:43 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:15.686 Malloc0 00:05:15.944 09:33:43 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:15.944 Malloc1 00:05:15.944 09:33:43 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:15.944 09:33:43 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.944 09:33:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:15.944 09:33:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:15.944 09:33:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.944 09:33:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:15.944 09:33:43 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:15.944 09:33:43 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.944 09:33:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:15.944 09:33:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:15.944 09:33:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.944 09:33:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:15.944 09:33:43 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:15.944 09:33:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:15.944 09:33:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:15.944 09:33:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:16.202 /dev/nbd0 00:05:16.202 09:33:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:16.202 09:33:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:16.202 09:33:43 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:16.202 09:33:43 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:16.202 09:33:43 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:16.202 09:33:43 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:16.202 09:33:43 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:05:16.202 09:33:43 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:16.202 09:33:43 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:16.202 09:33:43 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:16.202 09:33:43 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:16.202 1+0 records in 00:05:16.202 1+0 records out 
00:05:16.202 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000453012 s, 9.0 MB/s 00:05:16.202 09:33:43 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:16.202 09:33:43 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:16.202 09:33:43 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:16.202 09:33:43 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:16.202 09:33:43 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:16.202 09:33:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:16.202 09:33:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:16.202 09:33:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:16.460 /dev/nbd1 00:05:16.460 09:33:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:16.460 09:33:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:16.460 09:33:44 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:16.460 09:33:44 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:16.460 09:33:44 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:16.460 09:33:44 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:16.460 09:33:44 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:16.460 09:33:44 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:16.460 09:33:44 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:16.460 09:33:44 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:16.460 09:33:44 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:16.460 1+0 records in 00:05:16.460 1+0 records out 00:05:16.460 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000263072 s, 15.6 MB/s 00:05:16.460 09:33:44 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:16.460 09:33:44 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:16.460 09:33:44 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:16.460 09:33:44 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:16.460 09:33:44 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:16.460 09:33:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:16.460 09:33:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:16.460 09:33:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:16.460 09:33:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.460 09:33:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:16.717 09:33:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:16.717 { 00:05:16.717 "nbd_device": "/dev/nbd0", 00:05:16.717 "bdev_name": "Malloc0" 00:05:16.717 }, 00:05:16.717 { 00:05:16.717 "nbd_device": "/dev/nbd1", 00:05:16.717 "bdev_name": "Malloc1" 00:05:16.717 } 
00:05:16.717 ]' 00:05:16.717 09:33:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:16.717 { 00:05:16.717 "nbd_device": "/dev/nbd0", 00:05:16.717 "bdev_name": "Malloc0" 00:05:16.717 }, 00:05:16.717 { 00:05:16.717 "nbd_device": "/dev/nbd1", 00:05:16.717 "bdev_name": "Malloc1" 00:05:16.717 } 00:05:16.717 ]' 00:05:16.717 09:33:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:16.717 09:33:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:16.717 /dev/nbd1' 00:05:16.717 09:33:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:16.717 09:33:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:16.717 /dev/nbd1' 00:05:16.717 09:33:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:16.717 09:33:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:16.717 09:33:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:16.717 09:33:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:16.717 09:33:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:16.718 09:33:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.718 09:33:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:16.718 09:33:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:16.718 09:33:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:16.718 09:33:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:16.718 09:33:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:16.718 256+0 records in 00:05:16.718 256+0 records out 00:05:16.718 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00692489 s, 151 MB/s 00:05:16.718 09:33:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:16.718 09:33:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:16.718 256+0 records in 00:05:16.718 256+0 records out 00:05:16.718 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0163326 s, 64.2 MB/s 00:05:16.718 09:33:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:16.718 09:33:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:16.718 256+0 records in 00:05:16.718 256+0 records out 00:05:16.718 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0199303 s, 52.6 MB/s 00:05:16.718 09:33:44 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:16.718 09:33:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.718 09:33:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:16.718 09:33:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:16.718 09:33:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:16.718 09:33:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:16.718 09:33:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:16.718 09:33:44 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:16.718 09:33:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:16.718 09:33:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:16.718 09:33:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:16.718 09:33:44 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:16.718 09:33:44 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:16.718 09:33:44 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.718 09:33:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.718 09:33:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:16.718 09:33:44 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:16.718 09:33:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:16.718 09:33:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:16.975 09:33:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:16.975 09:33:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:16.975 09:33:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:16.975 09:33:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:16.975 09:33:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:16.975 09:33:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:16.975 09:33:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:16.975 09:33:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:16.975 09:33:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:16.975 09:33:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:17.233 09:33:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:17.233 09:33:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:17.233 09:33:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:17.233 09:33:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:17.233 09:33:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:17.233 09:33:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:17.233 09:33:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:17.233 09:33:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:17.233 09:33:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:17.233 09:33:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.233 09:33:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:17.490 09:33:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:17.490 09:33:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:17.490 09:33:45 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:17.490 09:33:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:17.490 09:33:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:17.490 09:33:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:17.490 09:33:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:17.490 09:33:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:17.490 09:33:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:17.490 09:33:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:17.490 09:33:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:17.490 09:33:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:17.490 09:33:45 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:17.746 09:33:45 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:18.312 [2024-11-07 09:33:45.901763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:18.312 [2024-11-07 09:33:45.975659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.312 [2024-11-07 09:33:45.975680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.570 [2024-11-07 09:33:46.076058] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:18.570 [2024-11-07 09:33:46.076102] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:21.098 spdk_app_start Round 2 00:05:21.098 09:33:48 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:21.098 09:33:48 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:21.098 09:33:48 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58495 /var/tmp/spdk-nbd.sock 00:05:21.098 09:33:48 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58495 ']' 00:05:21.098 09:33:48 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:21.098 09:33:48 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:21.098 09:33:48 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:21.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:21.099 09:33:48 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:21.099 09:33:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:21.099 09:33:48 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:21.099 09:33:48 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:21.099 09:33:48 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:21.099 Malloc0 00:05:21.357 09:33:48 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:21.357 Malloc1 00:05:21.357 09:33:49 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:21.357 09:33:49 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.357 09:33:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:21.357 09:33:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:21.357 09:33:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.357 09:33:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:21.357 09:33:49 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:21.357 09:33:49 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.357 09:33:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:21.357 09:33:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:21.357 09:33:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.357 09:33:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:21.357 09:33:49 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:21.357 09:33:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:21.357 09:33:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:21.357 09:33:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:21.616 /dev/nbd0 00:05:21.616 09:33:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:21.616 09:33:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:21.616 09:33:49 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:21.616 09:33:49 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:21.616 09:33:49 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:21.616 09:33:49 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:21.616 09:33:49 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:05:21.616 09:33:49 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:21.616 09:33:49 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:21.616 09:33:49 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:21.616 09:33:49 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:21.616 1+0 records in 00:05:21.616 1+0 records out 
00:05:21.616 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000143341 s, 28.6 MB/s 00:05:21.616 09:33:49 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:21.616 09:33:49 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:21.616 09:33:49 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:21.616 09:33:49 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:21.616 09:33:49 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:21.616 09:33:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:21.616 09:33:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:21.616 09:33:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:21.875 /dev/nbd1 00:05:21.875 09:33:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:21.875 09:33:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:21.875 09:33:49 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:21.875 09:33:49 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:21.875 09:33:49 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:21.875 09:33:49 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:21.875 09:33:49 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:21.875 09:33:49 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:21.875 09:33:49 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:21.875 09:33:49 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:21.875 09:33:49 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:21.875 1+0 records in 00:05:21.875 1+0 records out 00:05:21.875 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027931 s, 14.7 MB/s 00:05:21.875 09:33:49 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:21.875 09:33:49 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:21.875 09:33:49 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:21.875 09:33:49 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:21.875 09:33:49 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:21.875 09:33:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:21.875 09:33:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:21.875 09:33:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:21.875 09:33:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.875 09:33:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:22.133 09:33:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:22.134 { 00:05:22.134 "nbd_device": "/dev/nbd0", 00:05:22.134 "bdev_name": "Malloc0" 00:05:22.134 }, 00:05:22.134 { 00:05:22.134 "nbd_device": "/dev/nbd1", 00:05:22.134 "bdev_name": "Malloc1" 00:05:22.134 } 
00:05:22.134 ]' 00:05:22.134 09:33:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:22.134 { 00:05:22.134 "nbd_device": "/dev/nbd0", 00:05:22.134 "bdev_name": "Malloc0" 00:05:22.134 }, 00:05:22.134 { 00:05:22.134 "nbd_device": "/dev/nbd1", 00:05:22.134 "bdev_name": "Malloc1" 00:05:22.134 } 00:05:22.134 ]' 00:05:22.134 09:33:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:22.134 09:33:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:22.134 /dev/nbd1' 00:05:22.134 09:33:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:22.134 09:33:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:22.134 /dev/nbd1' 00:05:22.134 09:33:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:22.134 09:33:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:22.134 09:33:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:22.134 09:33:49 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:22.134 09:33:49 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:22.134 09:33:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.134 09:33:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:22.134 09:33:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:22.134 09:33:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:22.134 09:33:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:22.134 09:33:49 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:22.134 256+0 records in 00:05:22.134 256+0 records out 00:05:22.134 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00782055 s, 134 MB/s 00:05:22.134 09:33:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:22.134 09:33:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:22.134 256+0 records in 00:05:22.134 256+0 records out 00:05:22.134 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.018951 s, 55.3 MB/s 00:05:22.134 09:33:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:22.134 09:33:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:22.134 256+0 records in 00:05:22.134 256+0 records out 00:05:22.134 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.017257 s, 60.8 MB/s 00:05:22.134 09:33:49 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:22.134 09:33:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.134 09:33:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:22.134 09:33:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:22.134 09:33:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:22.134 09:33:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:22.134 09:33:49 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:22.134 09:33:49 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:22.134 09:33:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:22.134 09:33:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:22.134 09:33:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:22.134 09:33:49 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:22.134 09:33:49 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:22.134 09:33:49 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.134 09:33:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.134 09:33:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:22.134 09:33:49 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:22.134 09:33:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:22.134 09:33:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:22.391 09:33:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:22.391 09:33:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:22.391 09:33:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:22.391 09:33:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:22.391 09:33:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:22.391 09:33:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:22.391 09:33:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:22.391 09:33:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:22.391 09:33:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:22.391 09:33:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:22.649 09:33:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:22.649 09:33:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:22.649 09:33:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:22.649 09:33:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:22.649 09:33:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:22.649 09:33:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:22.649 09:33:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:22.649 09:33:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:22.649 09:33:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:22.649 09:33:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.649 09:33:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:22.907 09:33:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:22.907 09:33:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:22.907 09:33:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # 
echo '[]' 00:05:22.907 09:33:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:22.907 09:33:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:22.907 09:33:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:22.907 09:33:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:22.907 09:33:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:22.907 09:33:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:22.907 09:33:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:22.907 09:33:50 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:22.907 09:33:50 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:22.907 09:33:50 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:23.165 09:33:50 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:23.731 [2024-11-07 09:33:51.304354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:23.732 [2024-11-07 09:33:51.378822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.732 [2024-11-07 09:33:51.378900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.990 [2024-11-07 09:33:51.478067] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:23.990 [2024-11-07 09:33:51.478124] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:26.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:26.548 09:33:53 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58495 /var/tmp/spdk-nbd.sock 00:05:26.548 09:33:53 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58495 ']' 00:05:26.548 09:33:53 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:26.548 09:33:53 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:26.548 09:33:53 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:26.548 09:33:53 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:26.548 09:33:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:26.548 09:33:53 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:26.548 09:33:53 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:26.548 09:33:53 event.app_repeat -- event/event.sh@39 -- # killprocess 58495 00:05:26.548 09:33:53 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 58495 ']' 00:05:26.548 09:33:53 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 58495 00:05:26.548 09:33:53 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:05:26.548 09:33:53 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:26.548 09:33:53 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58495 00:05:26.548 killing process with pid 58495 00:05:26.548 09:33:54 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:26.548 09:33:54 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:26.548 09:33:54 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58495' 00:05:26.548 09:33:54 event.app_repeat -- common/autotest_common.sh@971 -- # kill 58495 00:05:26.548 09:33:54 event.app_repeat -- common/autotest_common.sh@976 -- # wait 58495 00:05:27.115 spdk_app_start is called in Round 0. 00:05:27.115 Shutdown signal received, stop current app iteration 00:05:27.115 Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 reinitialization... 00:05:27.115 spdk_app_start is called in Round 1. 00:05:27.115 Shutdown signal received, stop current app iteration 00:05:27.115 Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 reinitialization... 00:05:27.115 spdk_app_start is called in Round 2. 00:05:27.115 Shutdown signal received, stop current app iteration 00:05:27.115 Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 reinitialization... 00:05:27.115 spdk_app_start is called in Round 3. 00:05:27.115 Shutdown signal received, stop current app iteration 00:05:27.115 09:33:54 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:27.115 09:33:54 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:27.115 00:05:27.115 real 0m17.932s 00:05:27.115 user 0m39.239s 00:05:27.115 sys 0m2.176s 00:05:27.115 09:33:54 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:27.115 ************************************ 00:05:27.115 END TEST app_repeat 00:05:27.115 ************************************ 00:05:27.115 09:33:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:27.115 09:33:54 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:27.115 09:33:54 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:27.115 09:33:54 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:27.115 09:33:54 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:27.115 09:33:54 event -- common/autotest_common.sh@10 -- # set +x 00:05:27.115 ************************************ 00:05:27.115 START TEST cpu_locks 00:05:27.115 ************************************ 00:05:27.115 09:33:54 event.cpu_locks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:27.115 * Looking for test storage... 
00:05:27.115 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:27.115 09:33:54 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:27.115 09:33:54 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:05:27.115 09:33:54 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:27.115 09:33:54 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:27.115 09:33:54 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:27.115 09:33:54 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:27.115 09:33:54 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:27.115 09:33:54 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:27.115 09:33:54 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:27.115 09:33:54 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:27.115 09:33:54 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:27.115 09:33:54 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:27.115 09:33:54 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:27.115 09:33:54 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:27.115 09:33:54 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:27.115 09:33:54 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:27.115 09:33:54 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:27.115 09:33:54 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:27.115 09:33:54 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:27.115 09:33:54 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:27.115 09:33:54 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:27.115 09:33:54 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:27.115 09:33:54 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:27.115 09:33:54 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:27.115 09:33:54 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:27.116 09:33:54 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:27.116 09:33:54 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:27.116 09:33:54 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:27.116 09:33:54 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:27.116 09:33:54 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:27.116 09:33:54 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:27.116 09:33:54 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:27.116 09:33:54 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:27.116 09:33:54 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:27.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.116 --rc genhtml_branch_coverage=1 00:05:27.116 --rc genhtml_function_coverage=1 00:05:27.116 --rc genhtml_legend=1 00:05:27.116 --rc geninfo_all_blocks=1 00:05:27.116 --rc geninfo_unexecuted_blocks=1 00:05:27.116 00:05:27.116 ' 00:05:27.116 09:33:54 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:27.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.116 --rc genhtml_branch_coverage=1 00:05:27.116 --rc genhtml_function_coverage=1 
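The lcov probe above ends in scripts/common.sh's cmp_versions, a pure-bash dotted-version comparator: split both version strings on '.', '-' and ':', then walk the components numerically until one side wins. The real helper accumulates lt/gt/eq flags (@343-@344); this is a behavioral condensation of the traced logic, not the verbatim source:

decimal() {                                     # @353-@355: numeric components only
    [[ $1 =~ ^[0-9]+$ ]] && echo "$1"
}
cmp_versions() {                                # e.g. cmp_versions 1.15 '<' 2
    local -a ver1 ver2
    local op=$2 v a b
    IFS=.-: read -ra ver1 <<< "$1"              # @336
    IFS=.-: read -ra ver2 <<< "$3"              # @337
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        a=$(decimal "${ver1[v]:-0}")            # @365: missing components act as 0
        b=$(decimal "${ver2[v]:-0}")            # @366
        ((a > b)) && { [[ $op == '>' ]]; return; }   # @367: first difference decides
        ((a < b)) && { [[ $op == '<' ]]; return; }   # @368
    done
    [[ $op == '=' ]]                            # equal all the way down
}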
00:05:27.116 --rc genhtml_legend=1 00:05:27.116 --rc geninfo_all_blocks=1 00:05:27.116 --rc geninfo_unexecuted_blocks=1 00:05:27.116 00:05:27.116 ' 00:05:27.116 09:33:54 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:27.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.116 --rc genhtml_branch_coverage=1 00:05:27.116 --rc genhtml_function_coverage=1 00:05:27.116 --rc genhtml_legend=1 00:05:27.116 --rc geninfo_all_blocks=1 00:05:27.116 --rc geninfo_unexecuted_blocks=1 00:05:27.116 00:05:27.116 ' 00:05:27.116 09:33:54 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:27.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.116 --rc genhtml_branch_coverage=1 00:05:27.116 --rc genhtml_function_coverage=1 00:05:27.116 --rc genhtml_legend=1 00:05:27.116 --rc geninfo_all_blocks=1 00:05:27.116 --rc geninfo_unexecuted_blocks=1 00:05:27.116 00:05:27.116 ' 00:05:27.116 09:33:54 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:27.116 09:33:54 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:27.116 09:33:54 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:27.116 09:33:54 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:27.116 09:33:54 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:27.116 09:33:54 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:27.116 09:33:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:27.116 ************************************ 00:05:27.116 START TEST default_locks 00:05:27.116 ************************************ 00:05:27.116 09:33:54 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:05:27.116 09:33:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58931 00:05:27.116 09:33:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58931 00:05:27.116 09:33:54 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 58931 ']' 00:05:27.116 09:33:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:27.116 09:33:54 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.116 09:33:54 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:27.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.116 09:33:54 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.116 09:33:54 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:27.116 09:33:54 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:27.376 [2024-11-07 09:33:54.828353] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
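The version check traced above comes from cmp_versions in scripts/common.sh: each version string is split on `.`, `-`, and `:` and compared field by field, which is how the suite decides that lcov 1.15 is older than 2 before picking coverage flags. A minimal standalone sketch of that comparison, assuming only plain bash 4+ (the function name follows the trace; everything else is illustrative):

  # Returns 0 when "$1 $2 $3" holds, e.g. cmp_versions 1.15 '<' 2.
  cmp_versions() {
    local -a ver1 ver2
    local op=$2 v
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
      # Missing fields compare as 0, so 1.15 vs 2 behaves like 1.15.0 vs 2.0.0.
      (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
      (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '==' ]]
  }
  cmp_versions 1.15 '<' 2 && echo "lcov older than 2"   # matches the trace: returns 0

In the run above the first differing field is 1 vs 2, so the '<' branch fires and the helper returns 0, exactly as the "return 0" line in the trace shows.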
00:05:27.376 [2024-11-07 09:33:54.828649] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58931 ]
00:05:27.376 [2024-11-07 09:33:54.989564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:27.636 [2024-11-07 09:33:55.092006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:28.209 09:33:55 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:28.209 09:33:55 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0
00:05:28.209 09:33:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58931
00:05:28.209 09:33:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:28.209 09:33:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58931
00:05:28.467 09:33:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58931
00:05:28.467 09:33:55 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 58931 ']'
00:05:28.467 09:33:55 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 58931
00:05:28.467 09:33:55 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname
00:05:28.467 09:33:55 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:05:28.467 09:33:55 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58931
00:05:28.467 killing process with pid 58931
09:33:55 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0
09:33:55 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
09:33:55 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58931'
09:33:55 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 58931
09:33:55 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 58931
00:05:29.888 09:33:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58931
00:05:29.888 09:33:57 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0
00:05:29.888 09:33:57 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58931
00:05:29.888 09:33:57 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:05:29.888 09:33:57 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:29.888 09:33:57 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:05:29.888 09:33:57 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:29.888 09:33:57 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 58931
00:05:29.888 09:33:57 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 58931 ']'
00:05:29.888 09:33:57 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:29.888 09:33:57 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100
00:05:29.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:29.888 ERROR: process (pid: 58931) is no longer running
00:05:29.888 09:33:57 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:29.888 09:33:57 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:29.888 09:33:57 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:29.888 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (58931) - No such process
00:05:29.888 09:33:57 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:29.888 09:33:57 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1
00:05:29.888 09:33:57 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1
00:05:29.888 09:33:57 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:05:29.888 09:33:57 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:05:29.888 09:33:57 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:05:29.888 09:33:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:05:29.888 09:33:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:05:29.888 09:33:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:05:29.888 09:33:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:05:29.888
00:05:29.888 real 0m2.699s
00:05:29.888 user 0m2.679s
00:05:29.888 sys 0m0.456s
00:05:29.888 09:33:57 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:29.888 ************************************
00:05:29.888 END TEST default_locks
00:05:29.888 ************************************
00:05:29.888 09:33:57 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:29.888 09:33:57 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:05:29.888 09:33:57 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:29.888 09:33:57 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:29.888 09:33:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:29.888 ************************************
00:05:29.888 START TEST default_locks_via_rpc
00:05:29.888 ************************************
00:05:29.888 09:33:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc
00:05:29.888 09:33:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58995
00:05:29.888 09:33:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58995
00:05:29.888 09:33:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 58995 ']'
00:05:29.888 09:33:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:29.888 09:33:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100
00:05:29.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
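The default_locks run that just finished is the baseline pattern: start a one-core target, confirm a POSIX lock file exists, kill the target, and expect waitforlisten on the dead pid to fail. The lock probe itself is just lslocks piped through grep, as the @22 lines show. A condensed sketch of that flow, assuming util-linux lslocks is installed and using the binary path from the trace (the sleep is a crude stand-in for the real waitforlisten polling loop):

  # Probe used by the test: does this PID hold a /var/tmp/spdk_cpu_lock_* file lock?
  locks_exist() {
    lslocks -p "$1" | grep -q spdk_cpu_lock
  }
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &   # core 0 only
  tgt=$!
  sleep 1                                                    # stand-in for waitforlisten
  locks_exist "$tgt" && echo "core lock held by $tgt"
  kill "$tgt" && wait "$tgt"                                 # lock file is released on exit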
00:05:29.888 09:33:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:05:29.888 09:33:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:29.888 09:33:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:29.888 09:33:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:30.148 [2024-11-07 09:33:57.575751] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization...
00:05:30.148 [2024-11-07 09:33:57.575879] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58995 ]
00:05:30.148 [2024-11-07 09:33:57.738737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:30.410 [2024-11-07 09:33:57.840942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:30.982 09:33:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:30.982 09:33:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0
00:05:30.982 09:33:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:05:30.982 09:33:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:30.982 09:33:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:30.982 09:33:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:30.982 09:33:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:05:30.982 09:33:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:05:30.982 09:33:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:05:30.982 09:33:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:05:30.982 09:33:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:05:30.982 09:33:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:30.982 09:33:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:30.982 09:33:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:30.982 09:33:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58995
00:05:30.982 09:33:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58995
00:05:30.982 09:33:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:30.982 09:33:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58995
00:05:30.982 09:33:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 58995 ']'
00:05:30.982 09:33:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 58995
00:05:30.982 09:33:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname
00:05:30.983 09:33:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:05:30.983 09:33:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58995
00:05:30.983 killing process with pid 58995
09:33:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0
09:33:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
09:33:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58995'
09:33:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 58995
09:33:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 58995
00:05:32.895 ************************************
00:05:32.895 END TEST default_locks_via_rpc
00:05:32.895 ************************************
00:05:32.895
00:05:32.895 real 0m2.650s
00:05:32.895 user 0m2.675s
00:05:32.895 sys 0m0.451s
00:05:32.895 09:34:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:32.895 09:34:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:32.895 09:34:00 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:05:32.895 09:34:00 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:32.895 09:34:00 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:32.895 09:34:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:32.895 ************************************
00:05:32.895 START TEST non_locking_app_on_locked_coremask
00:05:32.895 ************************************
00:05:32.895 09:34:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask
00:05:32.895 09:34:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59047
00:05:32.895 09:34:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59047 /var/tmp/spdk.sock
00:05:32.895 09:34:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59047 ']'
00:05:32.895 09:34:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:32.895 09:34:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:05:32.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:32.895 09:34:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:32.895 09:34:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:05:32.895 09:34:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:32.895 09:34:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:32.895 [2024-11-07 09:34:00.291113] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization...
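default_locks_via_rpc, which just passed above, drops and re-takes the same per-core locks at runtime over the RPC socket instead of at startup: framework_disable_cpumask_locks releases the lock files while the target keeps running, framework_enable_cpumask_locks re-acquires them, and the lslocks probe confirms each state. In the trace rpc_cmd is the test helper that drives these calls against $rpc_addr; done by hand with scripts/rpc.py from the SPDK tree, the equivalent session would look roughly like this (socket path and method names as in the trace, $tgt assumed to hold the target's pid):

  # Release the per-core lock files without stopping the target...
  ./scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks
  lslocks -p "$tgt" | grep spdk_cpu_lock || echo "no core locks held"
  # ...then re-acquire them, again without a restart.
  ./scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
  lslocks -p "$tgt" | grep -q spdk_cpu_lock && echo "core locks held again"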
00:05:32.895 [2024-11-07 09:34:00.291447] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59047 ]
00:05:32.895 [2024-11-07 09:34:00.452852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:32.895 [2024-11-07 09:34:00.556682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:33.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:33.837 09:34:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:33.837 09:34:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0
00:05:33.837 09:34:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:05:33.837 09:34:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59063
00:05:33.837 09:34:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59063 /var/tmp/spdk2.sock
00:05:33.837 09:34:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59063 ']'
00:05:33.837 09:34:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:33.837 09:34:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:05:33.837 09:34:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:33.837 09:34:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:33.837 09:34:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:33.837 [2024-11-07 09:34:01.215222] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization...
00:05:33.837 [2024-11-07 09:34:01.215556] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59063 ]
00:05:33.837 [2024-11-07 09:34:01.392896] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
[2024-11-07 09:34:01.392947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:34.097 [2024-11-07 09:34:01.613846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:35.034 09:34:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:35.034 09:34:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0
00:05:35.034 09:34:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59047
00:05:35.034 09:34:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:35.034 09:34:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59047
00:05:35.292 09:34:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59047
00:05:35.292 09:34:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59047 ']'
00:05:35.292 09:34:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 59047
00:05:35.292 09:34:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname
00:05:35.292 09:34:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:05:35.292 09:34:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59047
00:05:35.550 killing process with pid 59047
09:34:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0
09:34:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
09:34:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59047'
09:34:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 59047
09:34:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 59047
00:05:38.081 09:34:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59063
00:05:38.081 09:34:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59063 ']'
00:05:38.081 09:34:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 59063
00:05:38.081 09:34:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname
00:05:38.081 09:34:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:05:38.081 09:34:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59063
killing process with pid 59063
09:34:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0
09:34:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
09:34:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59063'
09:34:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 59063
09:34:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 59063
00:05:39.045
00:05:39.045 real 0m6.316s
00:05:39.045 user 0m6.546s
00:05:39.045 sys 0m0.851s
00:05:39.045 09:34:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:39.045 09:34:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:39.045 ************************************
00:05:39.045 END TEST non_locking_app_on_locked_coremask
00:05:39.045 ************************************
00:05:39.045 09:34:06 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:05:39.045 09:34:06 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:39.045 09:34:06 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:39.045 09:34:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:39.045 ************************************
00:05:39.045 START TEST locking_app_on_unlocked_coremask
00:05:39.045 ************************************
00:05:39.045 09:34:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask
00:05:39.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:39.045 09:34:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59160
00:05:39.045 09:34:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59160 /var/tmp/spdk.sock
00:05:39.045 09:34:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59160 ']'
00:05:39.045 09:34:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:39.045 09:34:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:05:39.045 09:34:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:39.045 09:34:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:39.045 09:34:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:39.045 09:34:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:05:39.303 [2024-11-07 09:34:06.657343] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization...
00:05:39.303 [2024-11-07 09:34:06.657479] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59160 ]
00:05:39.303 [2024-11-07 09:34:06.818537] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
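non_locking_app_on_locked_coremask, which just passed above, shows the escape hatch: a second target can come up on a core that another target has already locked, as long as it opts out of locking and talks on its own RPC socket. A condensed sketch of the two launches, using only flags that appear verbatim in the trace (binary path and sockets as above):

  bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  "$bin" -m 0x1 &                                                  # takes /var/tmp/spdk_cpu_lock_000
  "$bin" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # same core, never tries to lock
  # Both come up: only the first holds the lock file; the second skips the claim entirely,
  # which is why its startup prints "CPU core locks deactivated." in the log above.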
00:05:39.303 [2024-11-07 09:34:06.818590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:39.303 [2024-11-07 09:34:06.929999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:40.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
09:34:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
09:34:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0
09:34:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59171
09:34:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59171 /var/tmp/spdk2.sock
09:34:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
09:34:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59171 ']'
09:34:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock
09:34:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
09:34:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
09:34:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
09:34:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:40.236 [2024-11-07 09:34:07.646516] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization...
00:05:40.236 [2024-11-07 09:34:07.646846] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59171 ]
00:05:40.236 [2024-11-07 09:34:07.821271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:40.504 [2024-11-07 09:34:08.053081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:41.883 09:34:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
09:34:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0
09:34:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59171
09:34:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59171
09:34:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:42.141 09:34:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59160
09:34:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59160 ']'
09:34:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 59160
09:34:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname
09:34:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
09:34:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59160
killing process with pid 59160
09:34:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0
09:34:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
09:34:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59160'
09:34:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 59160
09:34:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 59160
00:05:44.668 09:34:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59171
09:34:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59171 ']'
09:34:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 59171
09:34:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname
09:34:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
09:34:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59171
killing process with pid 59171
09:34:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
09:34:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59171'
09:34:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 59171
09:34:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 59171
00:05:46.069 ************************************
00:05:46.069 END TEST locking_app_on_unlocked_coremask
00:05:46.069 ************************************
00:05:46.069
00:05:46.069 real 0m6.723s
00:05:46.069 user 0m6.901s
00:05:46.069 sys 0m0.920s
00:05:46.069 09:34:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:46.069 09:34:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:46.069 09:34:13 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:05:46.069 09:34:13 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:46.069 09:34:13 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:46.069 09:34:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:46.069 ************************************
00:05:46.069 START TEST locking_app_on_locked_coremask
00:05:46.069 ************************************
00:05:46.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:46.069 09:34:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask
00:05:46.069 09:34:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59272
00:05:46.069 09:34:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59272 /var/tmp/spdk.sock
00:05:46.069 09:34:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59272 ']'
00:05:46.069 09:34:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:46.069 09:34:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:05:46.069 09:34:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:46.069 09:34:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:05:46.070 09:34:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:46.070 09:34:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:46.070 [2024-11-07 09:34:13.438044] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization...
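locking_app_on_unlocked_coremask, completed above, reverses the roles of the previous test: the first target starts with --disable-cpumask-locks and leaves core 0 unclaimed, so the second, lock-enabled target on the same core is the one that ends up owning the lock file, which the locks_exist 59171 check confirms. A minimal sketch of that ordering, assuming $bin and the sockets from the earlier sketch:

  "$bin" -m 0x1 --disable-cpumask-locks &        # first target, core 0 left unclaimed
  "$bin" -m 0x1 -r /var/tmp/spdk2.sock &         # second target claims the core lock
  second=$!
  lslocks -p "$second" | grep -q spdk_cpu_lock && echo "second target owns the core lock"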
00:05:46.070 [2024-11-07 09:34:13.438178] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59272 ]
00:05:46.070 [2024-11-07 09:34:13.598091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:46.070 [2024-11-07 09:34:13.700690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:46.671 09:34:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
09:34:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0
09:34:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59288
09:34:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59288 /var/tmp/spdk2.sock
09:34:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0
09:34:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59288 /var/tmp/spdk2.sock
09:34:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
09:34:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten
09:34:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
09:34:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten
09:34:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
09:34:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59288 /var/tmp/spdk2.sock
09:34:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59288 ']'
09:34:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock
09:34:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
09:34:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
09:34:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:46.931 09:34:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:46.931 [2024-11-07 09:34:14.398729] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization...
00:05:46.931 [2024-11-07 09:34:14.399307] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59288 ]
00:05:46.931 [2024-11-07 09:34:14.579676] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59272 has claimed it.
00:05:46.931 [2024-11-07 09:34:14.579746] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:05:47.501 ERROR: process (pid: 59288) is no longer running
00:05:47.501 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (59288) - No such process
09:34:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
09:34:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1
09:34:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1
09:34:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 ))
09:34:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]]
09:34:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 ))
09:34:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59272
09:34:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
09:34:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59272
00:05:47.759 09:34:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59272
09:34:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59272 ']'
09:34:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 59272
09:34:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname
09:34:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
09:34:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59272
killing process with pid 59272
09:34:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0
09:34:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
09:34:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59272'
09:34:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 59272
09:34:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 59272
00:05:49.133
00:05:49.133 real 0m3.401s
00:05:49.133 user 0m3.599s
00:05:49.133 sys 0m0.569s
09:34:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable
09:34:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:49.133 ************************************
00:05:49.133 END TEST locking_app_on_locked_coremask
00:05:49.133 ************************************
00:05:49.133 09:34:16 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
09:34:16 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
09:34:16 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable
09:34:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:49.133 ************************************
00:05:49.133 START TEST locking_overlapped_coremask
00:05:49.133 ************************************
09:34:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask
00:05:49.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
09:34:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59347
09:34:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59347 /var/tmp/spdk.sock
09:34:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7
09:34:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 59347 ']'
09:34:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
09:34:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
09:34:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
09:34:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
09:34:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:49.391 [2024-11-07 09:34:16.870649] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization...
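The locking_app_on_locked_coremask result above hinges on the NOT helper from autotest_common.sh: the second lock-enabled target must refuse to start ("Cannot create lock on core 0, probably process 59272 has claimed it"), so the test passes only when waitforlisten fails, which is why the trace shows "return 1" followed by "es=1" and the test still ending in success. A minimal sketch of such an inversion helper, assuming plain bash and simplified against the valid_exec_arg and es>128 handling visible in the trace:

  # Succeed only when the wrapped command fails, in the spirit of autotest_common.sh's NOT.
  NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))          # exit 0 iff the command exited non-zero
  }
  NOT waitforlisten 59288 /var/tmp/spdk2.sock && echo "second target was rejected, as expected"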
00:05:49.391 [2024-11-07 09:34:16.870768] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59347 ]
00:05:49.391 [2024-11-07 09:34:17.026550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:49.649 [2024-11-07 09:34:17.130481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:49.649 [2024-11-07 09:34:17.130784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:49.650 [2024-11-07 09:34:17.130786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:50.215 09:34:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
09:34:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0
09:34:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59359
09:34:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59359 /var/tmp/spdk2.sock
09:34:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0
09:34:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59359 /var/tmp/spdk2.sock
09:34:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten
09:34:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
09:34:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
09:34:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
09:34:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
09:34:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59359 /var/tmp/spdk2.sock
09:34:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 59359 ']'
09:34:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock
09:34:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
09:34:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
09:34:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
09:34:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:50.474 [2024-11-07 09:34:17.799264] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization...
00:05:50.474 [2024-11-07 09:34:17.799378] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59359 ]
00:05:50.474 [2024-11-07 09:34:17.974617] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59347 has claimed it.
00:05:50.474 [2024-11-07 09:34:17.978713] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:05:51.040 ERROR: process (pid: 59359) is no longer running
00:05:51.040 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (59359) - No such process
09:34:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
09:34:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1
09:34:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1
09:34:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 ))
09:34:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]]
09:34:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 ))
09:34:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
09:34:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
09:34:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
09:34:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
09:34:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59347
09:34:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 59347 ']'
09:34:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 59347
09:34:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname
09:34:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
09:34:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59347
09:34:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0
09:34:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
09:34:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59347'
killing process with pid 59347
09:34:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 59347
09:34:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 59347
00:05:52.414
00:05:52.414 real 0m3.125s
00:05:52.414 user 0m8.511s
00:05:52.414 sys 0m0.448s
00:05:52.414 09:34:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:52.414 09:34:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:52.414 ************************************
00:05:52.414 END TEST locking_overlapped_coremask
00:05:52.414 ************************************
00:05:52.414 09:34:19 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:05:52.414 09:34:19 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:52.414 09:34:19 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:52.414 09:34:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:52.414 ************************************
00:05:52.414 START TEST locking_overlapped_coremask_via_rpc
00:05:52.414 ************************************
00:05:52.414 09:34:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc
00:05:52.414 09:34:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59418
00:05:52.414 09:34:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59418 /var/tmp/spdk.sock
00:05:52.414 09:34:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59418 ']'
00:05:52.414 09:34:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:52.414 09:34:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:05:52.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:52.414 09:34:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100
00:05:52.414 09:34:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:52.414 09:34:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:52.414 09:34:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:52.414 [2024-11-07 09:34:20.054753] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization...
00:05:52.414 [2024-11-07 09:34:20.054889] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59418 ]
00:05:52.673 [2024-11-07 09:34:20.218840] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
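For the -m 0x7 (cores 0-2) run that just ended, check_remaining_locks compares the lock files actually present on disk against the set the core mask implies, by expanding a glob next to a brace range and comparing the joined words, exactly as the @36-@38 lines above show. A sketch of the same check, assuming bash and the variable names from the trace:

  # One lock file per claimed core: mask 0x7 covers cores 0, 1 and 2.
  locks=(/var/tmp/spdk_cpu_lock_*)                      # what really exists (glob, sorted)
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})    # what mask 0x7 should leave behind
  [[ ${locks[*]} == "${locks_expected[*]}" ]] && echo "no stray or missing core locks"

The glob only matches files that exist while the brace range expands unconditionally, so any stale or missing spdk_cpu_lock_NNN file makes the two word lists differ and the test fail.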
00:05:52.673 [2024-11-07 09:34:20.218999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:52.674 [2024-11-07 09:34:20.326964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.674 [2024-11-07 09:34:20.327273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:52.674 [2024-11-07 09:34:20.327296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:53.615 09:34:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:53.615 09:34:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:53.615 09:34:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59436 00:05:53.615 09:34:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59436 /var/tmp/spdk2.sock 00:05:53.615 09:34:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59436 ']' 00:05:53.615 09:34:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:53.615 09:34:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:53.615 09:34:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:53.615 09:34:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:53.615 09:34:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.615 09:34:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:53.615 [2024-11-07 09:34:21.004009] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:05:53.616 [2024-11-07 09:34:21.004721] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59436 ] 00:05:53.616 [2024-11-07 09:34:21.170591] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:53.616 [2024-11-07 09:34:21.170643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:53.876 [2024-11-07 09:34:21.370241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:53.876 [2024-11-07 09:34:21.373792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:53.876 [2024-11-07 09:34:21.373808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:54.816 09:34:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:54.816 09:34:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:54.816 09:34:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:54.816 09:34:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.816 09:34:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.816 09:34:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.816 09:34:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:54.816 09:34:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:54.816 09:34:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:54.816 09:34:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:54.816 09:34:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:54.816 09:34:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:54.816 09:34:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:54.816 09:34:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:54.816 09:34:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.816 09:34:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.816 [2024-11-07 09:34:22.417821] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59418 has claimed it. 00:05:54.816 request: 00:05:54.816 { 00:05:54.816 "method": "framework_enable_cpumask_locks", 00:05:54.816 "req_id": 1 00:05:54.816 } 00:05:54.816 Got JSON-RPC error response 00:05:54.816 response: 00:05:54.816 { 00:05:54.816 "code": -32603, 00:05:54.816 "message": "Failed to claim CPU core: 2" 00:05:54.816 } 00:05:54.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
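The -32603 failure above is the intended result: both targets are launched with --disable-cpumask-locks, the first target then claims per-core lock files over RPC, and the second target's claim collides on the shared core 2. The same sequence with scripts/rpc.py, using the socket paths from the log (sketch):

  # target 1 (mask 0x7) claims locks for cores 0-2 on the default socket
  scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
  # target 2 (mask 0x1c) now fails with "Failed to claim CPU core: 2"
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks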
00:05:54.816 09:34:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:54.816 09:34:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:54.816 09:34:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:54.816 09:34:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:54.816 09:34:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:54.816 09:34:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59418 /var/tmp/spdk.sock 00:05:54.816 09:34:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59418 ']' 00:05:54.816 09:34:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.816 09:34:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:54.816 09:34:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.816 09:34:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:54.816 09:34:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.075 09:34:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:55.075 09:34:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:55.075 09:34:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59436 /var/tmp/spdk2.sock 00:05:55.075 09:34:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59436 ']' 00:05:55.075 09:34:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:55.075 09:34:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:55.076 09:34:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:55.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
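While the locks are held they exist as one file per claimed core under /var/tmp, and the check_remaining_locks step below compares that set against the expected brace expansion. Inspecting them by hand would look like this (sketch):

  ls /var/tmp/spdk_cpu_lock_*   # expect spdk_cpu_lock_000, _001, _002 for mask 0x7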
00:05:55.076 09:34:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:55.076 09:34:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.340 09:34:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:55.340 09:34:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:55.340 09:34:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:55.340 09:34:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:55.340 09:34:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:55.340 09:34:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:55.340 00:05:55.340 real 0m2.871s 00:05:55.340 user 0m1.087s 00:05:55.340 sys 0m0.123s 00:05:55.340 09:34:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:55.340 09:34:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.340 ************************************ 00:05:55.340 END TEST locking_overlapped_coremask_via_rpc 00:05:55.340 ************************************ 00:05:55.340 09:34:22 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:55.340 09:34:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59418 ]] 00:05:55.340 09:34:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59418 00:05:55.340 09:34:22 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59418 ']' 00:05:55.340 09:34:22 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59418 00:05:55.340 09:34:22 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:05:55.340 09:34:22 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:55.340 09:34:22 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59418 00:05:55.340 killing process with pid 59418 00:05:55.340 09:34:22 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:55.340 09:34:22 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:55.340 09:34:22 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59418' 00:05:55.340 09:34:22 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 59418 00:05:55.340 09:34:22 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 59418 00:05:57.263 09:34:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59436 ]] 00:05:57.263 09:34:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59436 00:05:57.263 09:34:24 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59436 ']' 00:05:57.263 09:34:24 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59436 00:05:57.263 09:34:24 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:05:57.263 09:34:24 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:57.263 
09:34:24 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59436 00:05:57.263 killing process with pid 59436 00:05:57.263 09:34:24 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:05:57.263 09:34:24 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:05:57.263 09:34:24 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59436' 00:05:57.263 09:34:24 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 59436 00:05:57.263 09:34:24 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 59436 00:05:58.205 09:34:25 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:58.205 Process with pid 59418 is not found 00:05:58.205 09:34:25 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:58.205 09:34:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59418 ]] 00:05:58.205 09:34:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59418 00:05:58.205 09:34:25 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59418 ']' 00:05:58.205 09:34:25 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59418 00:05:58.205 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (59418) - No such process 00:05:58.205 09:34:25 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 59418 is not found' 00:05:58.205 09:34:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59436 ]] 00:05:58.205 09:34:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59436 00:05:58.205 09:34:25 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59436 ']' 00:05:58.205 09:34:25 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59436 00:05:58.205 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (59436) - No such process 00:05:58.205 Process with pid 59436 is not found 00:05:58.205 09:34:25 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 59436 is not found' 00:05:58.205 09:34:25 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:58.205 ************************************ 00:05:58.205 END TEST cpu_locks 00:05:58.205 ************************************ 00:05:58.205 00:05:58.205 real 0m31.115s 00:05:58.205 user 0m53.507s 00:05:58.205 sys 0m4.701s 00:05:58.205 09:34:25 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:58.205 09:34:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:58.205 ************************************ 00:05:58.205 END TEST event 00:05:58.205 ************************************ 00:05:58.205 00:05:58.205 real 0m58.721s 00:05:58.205 user 1m47.691s 00:05:58.205 sys 0m7.748s 00:05:58.205 09:34:25 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:58.205 09:34:25 event -- common/autotest_common.sh@10 -- # set +x 00:05:58.205 09:34:25 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:58.205 09:34:25 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:58.205 09:34:25 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:58.205 09:34:25 -- common/autotest_common.sh@10 -- # set +x 00:05:58.205 ************************************ 00:05:58.205 START TEST thread 00:05:58.205 ************************************ 00:05:58.205 09:34:25 thread -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:58.205 * Looking for test storage... 
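Just below, autotest_common.sh gates its coverage flags on the installed lcov version; the long trace is scripts/common.sh's cmp_versions splitting each version string on '.', '-' and ':' and comparing it field by field (the call is lt 1.15 2). A condensed sketch of that comparison, assuming purely numeric fields:

  lt() {                        # 0 (true) if version $1 < version $2
    local IFS=.-: a b v
    read -ra a <<< "$1"; read -ra b <<< "$2"
    for (( v = 0; v < ${#a[@]} || v < ${#b[@]}; v++ )); do
      (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
      (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
    done
    return 1                    # equal versions are not "less than"
  }
  lt 1.15 2 && echo 'lcov older than 2'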
00:05:58.205 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:58.205 09:34:25 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:58.205 09:34:25 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:05:58.205 09:34:25 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:58.467 09:34:25 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:58.467 09:34:25 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:58.467 09:34:25 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:58.467 09:34:25 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:58.467 09:34:25 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:58.467 09:34:25 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:58.467 09:34:25 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:58.467 09:34:25 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:58.467 09:34:25 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:58.467 09:34:25 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:58.467 09:34:25 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:58.467 09:34:25 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:58.467 09:34:25 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:58.467 09:34:25 thread -- scripts/common.sh@345 -- # : 1 00:05:58.467 09:34:25 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:58.467 09:34:25 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:58.467 09:34:25 thread -- scripts/common.sh@365 -- # decimal 1 00:05:58.467 09:34:25 thread -- scripts/common.sh@353 -- # local d=1 00:05:58.467 09:34:25 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:58.467 09:34:25 thread -- scripts/common.sh@355 -- # echo 1 00:05:58.467 09:34:25 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:58.467 09:34:25 thread -- scripts/common.sh@366 -- # decimal 2 00:05:58.467 09:34:25 thread -- scripts/common.sh@353 -- # local d=2 00:05:58.467 09:34:25 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:58.467 09:34:25 thread -- scripts/common.sh@355 -- # echo 2 00:05:58.467 09:34:25 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:58.467 09:34:25 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:58.467 09:34:25 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:58.467 09:34:25 thread -- scripts/common.sh@368 -- # return 0 00:05:58.467 09:34:25 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:58.467 09:34:25 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:58.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.467 --rc genhtml_branch_coverage=1 00:05:58.467 --rc genhtml_function_coverage=1 00:05:58.467 --rc genhtml_legend=1 00:05:58.467 --rc geninfo_all_blocks=1 00:05:58.467 --rc geninfo_unexecuted_blocks=1 00:05:58.467 00:05:58.467 ' 00:05:58.467 09:34:25 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:58.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.467 --rc genhtml_branch_coverage=1 00:05:58.467 --rc genhtml_function_coverage=1 00:05:58.467 --rc genhtml_legend=1 00:05:58.467 --rc geninfo_all_blocks=1 00:05:58.467 --rc geninfo_unexecuted_blocks=1 00:05:58.467 00:05:58.467 ' 00:05:58.467 09:34:25 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:58.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:58.468 --rc genhtml_branch_coverage=1 00:05:58.468 --rc genhtml_function_coverage=1 00:05:58.468 --rc genhtml_legend=1 00:05:58.468 --rc geninfo_all_blocks=1 00:05:58.468 --rc geninfo_unexecuted_blocks=1 00:05:58.468 00:05:58.468 ' 00:05:58.468 09:34:25 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:58.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.468 --rc genhtml_branch_coverage=1 00:05:58.468 --rc genhtml_function_coverage=1 00:05:58.468 --rc genhtml_legend=1 00:05:58.468 --rc geninfo_all_blocks=1 00:05:58.468 --rc geninfo_unexecuted_blocks=1 00:05:58.468 00:05:58.468 ' 00:05:58.468 09:34:25 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:58.468 09:34:25 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:05:58.468 09:34:25 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:58.468 09:34:25 thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.468 ************************************ 00:05:58.468 START TEST thread_poller_perf 00:05:58.468 ************************************ 00:05:58.468 09:34:25 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:58.468 [2024-11-07 09:34:25.968826] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:05:58.468 [2024-11-07 09:34:25.969059] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59596 ] 00:05:58.468 [2024-11-07 09:34:26.129878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.729 Running 1000 pollers for 1 seconds with 1 microseconds period. 
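The banner above is a direct readout of the poller_perf flags: -b 1000 registers 1000 pollers, -l 1 gives each a 1 microsecond timer period, and -t 1 measures for 1 second; the second run later in this test uses -l 0, which registers busy-loop pollers instead. For reference, the invocation is:

  test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1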
00:05:58.729 [2024-11-07 09:34:26.224439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.113 [2024-11-07T09:34:27.784Z] ====================================== 00:06:00.113 [2024-11-07T09:34:27.784Z] busy:2615714992 (cyc) 00:06:00.113 [2024-11-07T09:34:27.784Z] total_run_count: 307000 00:06:00.113 [2024-11-07T09:34:27.784Z] tsc_hz: 2600000000 (cyc) 00:06:00.113 [2024-11-07T09:34:27.784Z] ====================================== 00:06:00.113 [2024-11-07T09:34:27.784Z] poller_cost: 8520 (cyc), 3276 (nsec) 00:06:00.113 00:06:00.113 real 0m1.452s 00:06:00.113 user 0m1.281s 00:06:00.113 sys 0m0.063s 00:06:00.113 09:34:27 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:00.113 09:34:27 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:00.113 ************************************ 00:06:00.113 END TEST thread_poller_perf 00:06:00.113 ************************************ 00:06:00.113 09:34:27 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:00.113 09:34:27 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:06:00.113 09:34:27 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:00.113 09:34:27 thread -- common/autotest_common.sh@10 -- # set +x 00:06:00.113 ************************************ 00:06:00.113 START TEST thread_poller_perf 00:06:00.113 ************************************ 00:06:00.113 09:34:27 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:00.113 [2024-11-07 09:34:27.480493] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:06:00.113 [2024-11-07 09:34:27.481026] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59628 ] 00:06:00.113 [2024-11-07 09:34:27.642513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.113 Running 1000 pollers for 1 seconds with 0 microseconds period. 
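The summary block above boils down to one division: poller_cost is busy TSC cycles over total_run_count, converted to nanoseconds with tsc_hz. Reproducing the first run's numbers in shell arithmetic (the 0 microseconds run that follows uses the same formula):

  busy=2615714992 runs=307000 tsc_hz=2600000000
  cyc=$(( busy / runs ))                     # 8520 cyc per poller invocation
  nsec=$(( cyc * 1000000000 / tsc_hz ))      # 3276 nsec
  echo "poller_cost: $cyc (cyc), $nsec (nsec)"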
00:06:00.113 [2024-11-07 09:34:27.738340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.490 [2024-11-07T09:34:29.161Z] ====================================== 00:06:01.490 [2024-11-07T09:34:29.161Z] busy:2603328254 (cyc) 00:06:01.490 [2024-11-07T09:34:29.161Z] total_run_count: 3971000 00:06:01.490 [2024-11-07T09:34:29.161Z] tsc_hz: 2600000000 (cyc) 00:06:01.490 [2024-11-07T09:34:29.161Z] ====================================== 00:06:01.490 [2024-11-07T09:34:29.161Z] poller_cost: 655 (cyc), 251 (nsec) 00:06:01.490 00:06:01.490 real 0m1.440s 00:06:01.490 user 0m1.257s 00:06:01.490 sys 0m0.076s 00:06:01.490 09:34:28 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:01.490 09:34:28 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:01.490 ************************************ 00:06:01.490 END TEST thread_poller_perf 00:06:01.490 ************************************ 00:06:01.490 09:34:28 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:01.490 ************************************ 00:06:01.490 END TEST thread 00:06:01.490 ************************************ 00:06:01.490 00:06:01.490 real 0m3.141s 00:06:01.490 user 0m2.657s 00:06:01.490 sys 0m0.261s 00:06:01.491 09:34:28 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:01.491 09:34:28 thread -- common/autotest_common.sh@10 -- # set +x 00:06:01.491 09:34:28 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:01.491 09:34:28 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:01.491 09:34:28 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:01.491 09:34:28 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:01.491 09:34:28 -- common/autotest_common.sh@10 -- # set +x 00:06:01.491 ************************************ 00:06:01.491 START TEST app_cmdline 00:06:01.491 ************************************ 00:06:01.491 09:34:28 app_cmdline -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:01.491 * Looking for test storage... 
00:06:01.491 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:01.491 09:34:29 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:01.491 09:34:29 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:06:01.491 09:34:29 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:01.491 09:34:29 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:01.491 09:34:29 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:01.491 09:34:29 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:01.491 09:34:29 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:01.491 09:34:29 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:01.491 09:34:29 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:01.491 09:34:29 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:01.491 09:34:29 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:01.491 09:34:29 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:01.491 09:34:29 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:01.491 09:34:29 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:01.491 09:34:29 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:01.491 09:34:29 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:01.491 09:34:29 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:01.491 09:34:29 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:01.491 09:34:29 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:01.491 09:34:29 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:01.491 09:34:29 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:01.491 09:34:29 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:01.491 09:34:29 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:01.491 09:34:29 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:01.491 09:34:29 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:01.491 09:34:29 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:01.491 09:34:29 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:01.491 09:34:29 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:01.491 09:34:29 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:01.491 09:34:29 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:01.491 09:34:29 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:01.491 09:34:29 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:01.491 09:34:29 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:01.491 09:34:29 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:01.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.491 --rc genhtml_branch_coverage=1 00:06:01.491 --rc genhtml_function_coverage=1 00:06:01.491 --rc genhtml_legend=1 00:06:01.491 --rc geninfo_all_blocks=1 00:06:01.491 --rc geninfo_unexecuted_blocks=1 00:06:01.491 00:06:01.491 ' 00:06:01.491 09:34:29 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:01.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.491 --rc genhtml_branch_coverage=1 00:06:01.491 --rc genhtml_function_coverage=1 00:06:01.491 --rc genhtml_legend=1 00:06:01.491 --rc geninfo_all_blocks=1 00:06:01.491 --rc geninfo_unexecuted_blocks=1 00:06:01.491 
00:06:01.491 ' 00:06:01.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.491 09:34:29 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:01.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.491 --rc genhtml_branch_coverage=1 00:06:01.491 --rc genhtml_function_coverage=1 00:06:01.491 --rc genhtml_legend=1 00:06:01.491 --rc geninfo_all_blocks=1 00:06:01.491 --rc geninfo_unexecuted_blocks=1 00:06:01.491 00:06:01.491 ' 00:06:01.491 09:34:29 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:01.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.491 --rc genhtml_branch_coverage=1 00:06:01.491 --rc genhtml_function_coverage=1 00:06:01.491 --rc genhtml_legend=1 00:06:01.491 --rc geninfo_all_blocks=1 00:06:01.491 --rc geninfo_unexecuted_blocks=1 00:06:01.491 00:06:01.491 ' 00:06:01.491 09:34:29 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:01.491 09:34:29 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59716 00:06:01.491 09:34:29 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59716 00:06:01.491 09:34:29 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 59716 ']' 00:06:01.491 09:34:29 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.491 09:34:29 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:01.491 09:34:29 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.491 09:34:29 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:01.491 09:34:29 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:01.491 09:34:29 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:01.750 [2024-11-07 09:34:29.193287] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
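This target is intentionally started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are served; any other call is rejected with JSON-RPC error -32601 (Method not found), which is exactly what the env_dpdk_get_mem_stats probe below checks. A minimal sketch of the same whitelist behavior:

  build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  scripts/rpc.py spdk_get_version            # allowed, returns the version JSON
  scripts/rpc.py env_dpdk_get_mem_stats      # rejected: -32601 Method not found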
00:06:01.750 [2024-11-07 09:34:29.193882] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59716 ] 00:06:01.750 [2024-11-07 09:34:29.353693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.010 [2024-11-07 09:34:29.456404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.580 09:34:30 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:02.580 09:34:30 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:06:02.580 09:34:30 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:02.580 { 00:06:02.580 "version": "SPDK v25.01-pre git sha1 899af6c35", 00:06:02.580 "fields": { 00:06:02.580 "major": 25, 00:06:02.580 "minor": 1, 00:06:02.580 "patch": 0, 00:06:02.580 "suffix": "-pre", 00:06:02.580 "commit": "899af6c35" 00:06:02.580 } 00:06:02.580 } 00:06:02.839 09:34:30 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:02.839 09:34:30 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:02.839 09:34:30 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:02.839 09:34:30 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:02.839 09:34:30 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:02.839 09:34:30 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.839 09:34:30 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:02.839 09:34:30 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:02.839 09:34:30 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:02.839 09:34:30 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.839 09:34:30 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:02.839 09:34:30 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:02.839 09:34:30 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:02.839 09:34:30 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:02.839 09:34:30 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:02.839 09:34:30 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:02.839 09:34:30 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:02.839 09:34:30 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:02.839 09:34:30 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:02.839 09:34:30 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:02.839 09:34:30 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:02.839 09:34:30 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:02.839 09:34:30 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:02.839 09:34:30 app_cmdline -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:02.839 request: 00:06:02.839 { 00:06:02.839 "method": "env_dpdk_get_mem_stats", 00:06:02.839 "req_id": 1 00:06:02.839 } 00:06:02.839 Got JSON-RPC error response 00:06:02.839 response: 00:06:02.839 { 00:06:02.839 "code": -32601, 00:06:02.839 "message": "Method not found" 00:06:02.839 } 00:06:03.099 09:34:30 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:03.099 09:34:30 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:03.099 09:34:30 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:03.099 09:34:30 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:03.099 09:34:30 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59716 00:06:03.099 09:34:30 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 59716 ']' 00:06:03.099 09:34:30 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 59716 00:06:03.099 09:34:30 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:06:03.099 09:34:30 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:03.099 09:34:30 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59716 00:06:03.099 killing process with pid 59716 00:06:03.099 09:34:30 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:03.099 09:34:30 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:03.099 09:34:30 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59716' 00:06:03.099 09:34:30 app_cmdline -- common/autotest_common.sh@971 -- # kill 59716 00:06:03.099 09:34:30 app_cmdline -- common/autotest_common.sh@976 -- # wait 59716 00:06:04.475 ************************************ 00:06:04.475 END TEST app_cmdline 00:06:04.475 ************************************ 00:06:04.476 00:06:04.476 real 0m3.064s 00:06:04.476 user 0m3.379s 00:06:04.476 sys 0m0.461s 00:06:04.476 09:34:32 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:04.476 09:34:32 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:04.476 09:34:32 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:04.476 09:34:32 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:04.476 09:34:32 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:04.476 09:34:32 -- common/autotest_common.sh@10 -- # set +x 00:06:04.476 ************************************ 00:06:04.476 START TEST version 00:06:04.476 ************************************ 00:06:04.476 09:34:32 version -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:04.476 * Looking for test storage... 
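The version test traced below never starts the target at all; it pulls the version straight out of include/spdk/version.h, grepping each #define, taking the second tab-delimited field with cut -f2 and stripping the quotes with tr. The extraction for one field, as a sketch:

  grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' include/spdk/version.h \
    | cut -f2 | tr -d '"'    # -> 25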
00:06:04.476 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:04.476 09:34:32 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:04.476 09:34:32 version -- common/autotest_common.sh@1691 -- # lcov --version 00:06:04.476 09:34:32 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:04.733 09:34:32 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:04.733 09:34:32 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:04.733 09:34:32 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:04.733 09:34:32 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:04.733 09:34:32 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.733 09:34:32 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:04.733 09:34:32 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:04.733 09:34:32 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:04.733 09:34:32 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:04.733 09:34:32 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:04.733 09:34:32 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:04.733 09:34:32 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:04.733 09:34:32 version -- scripts/common.sh@344 -- # case "$op" in 00:06:04.733 09:34:32 version -- scripts/common.sh@345 -- # : 1 00:06:04.733 09:34:32 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:04.733 09:34:32 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:04.733 09:34:32 version -- scripts/common.sh@365 -- # decimal 1 00:06:04.733 09:34:32 version -- scripts/common.sh@353 -- # local d=1 00:06:04.733 09:34:32 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:04.733 09:34:32 version -- scripts/common.sh@355 -- # echo 1 00:06:04.733 09:34:32 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:04.733 09:34:32 version -- scripts/common.sh@366 -- # decimal 2 00:06:04.733 09:34:32 version -- scripts/common.sh@353 -- # local d=2 00:06:04.733 09:34:32 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:04.733 09:34:32 version -- scripts/common.sh@355 -- # echo 2 00:06:04.733 09:34:32 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:04.733 09:34:32 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:04.733 09:34:32 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:04.733 09:34:32 version -- scripts/common.sh@368 -- # return 0 00:06:04.734 09:34:32 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:04.734 09:34:32 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:04.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.734 --rc genhtml_branch_coverage=1 00:06:04.734 --rc genhtml_function_coverage=1 00:06:04.734 --rc genhtml_legend=1 00:06:04.734 --rc geninfo_all_blocks=1 00:06:04.734 --rc geninfo_unexecuted_blocks=1 00:06:04.734 00:06:04.734 ' 00:06:04.734 09:34:32 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:04.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.734 --rc genhtml_branch_coverage=1 00:06:04.734 --rc genhtml_function_coverage=1 00:06:04.734 --rc genhtml_legend=1 00:06:04.734 --rc geninfo_all_blocks=1 00:06:04.734 --rc geninfo_unexecuted_blocks=1 00:06:04.734 00:06:04.734 ' 00:06:04.734 09:34:32 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:04.734 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:04.734 --rc genhtml_branch_coverage=1 00:06:04.734 --rc genhtml_function_coverage=1 00:06:04.734 --rc genhtml_legend=1 00:06:04.734 --rc geninfo_all_blocks=1 00:06:04.734 --rc geninfo_unexecuted_blocks=1 00:06:04.734 00:06:04.734 ' 00:06:04.734 09:34:32 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:04.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.734 --rc genhtml_branch_coverage=1 00:06:04.734 --rc genhtml_function_coverage=1 00:06:04.734 --rc genhtml_legend=1 00:06:04.734 --rc geninfo_all_blocks=1 00:06:04.734 --rc geninfo_unexecuted_blocks=1 00:06:04.734 00:06:04.734 ' 00:06:04.734 09:34:32 version -- app/version.sh@17 -- # get_header_version major 00:06:04.734 09:34:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:04.734 09:34:32 version -- app/version.sh@14 -- # cut -f2 00:06:04.734 09:34:32 version -- app/version.sh@14 -- # tr -d '"' 00:06:04.734 09:34:32 version -- app/version.sh@17 -- # major=25 00:06:04.734 09:34:32 version -- app/version.sh@18 -- # get_header_version minor 00:06:04.734 09:34:32 version -- app/version.sh@14 -- # cut -f2 00:06:04.734 09:34:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:04.734 09:34:32 version -- app/version.sh@14 -- # tr -d '"' 00:06:04.734 09:34:32 version -- app/version.sh@18 -- # minor=1 00:06:04.734 09:34:32 version -- app/version.sh@19 -- # get_header_version patch 00:06:04.734 09:34:32 version -- app/version.sh@14 -- # cut -f2 00:06:04.734 09:34:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:04.734 09:34:32 version -- app/version.sh@14 -- # tr -d '"' 00:06:04.734 09:34:32 version -- app/version.sh@19 -- # patch=0 00:06:04.734 09:34:32 version -- app/version.sh@20 -- # get_header_version suffix 00:06:04.734 09:34:32 version -- app/version.sh@14 -- # cut -f2 00:06:04.734 09:34:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:04.734 09:34:32 version -- app/version.sh@14 -- # tr -d '"' 00:06:04.734 09:34:32 version -- app/version.sh@20 -- # suffix=-pre 00:06:04.734 09:34:32 version -- app/version.sh@22 -- # version=25.1 00:06:04.734 09:34:32 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:04.734 09:34:32 version -- app/version.sh@28 -- # version=25.1rc0 00:06:04.734 09:34:32 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:04.734 09:34:32 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:04.734 09:34:32 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:04.734 09:34:32 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:04.734 00:06:04.734 real 0m0.199s 00:06:04.734 user 0m0.125s 00:06:04.734 sys 0m0.100s 00:06:04.734 09:34:32 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:04.734 09:34:32 version -- common/autotest_common.sh@10 -- # set +x 00:06:04.734 ************************************ 00:06:04.734 END TEST version 00:06:04.734 ************************************ 00:06:04.734 09:34:32 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:04.734 09:34:32 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:04.734 09:34:32 -- spdk/autotest.sh@194 -- # uname -s 00:06:04.734 09:34:32 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:04.734 09:34:32 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:04.734 09:34:32 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:04.734 09:34:32 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:06:04.734 09:34:32 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:04.734 09:34:32 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:04.734 09:34:32 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:04.734 09:34:32 -- common/autotest_common.sh@10 -- # set +x 00:06:04.734 ************************************ 00:06:04.734 START TEST blockdev_nvme 00:06:04.734 ************************************ 00:06:04.734 09:34:32 blockdev_nvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:04.734 * Looking for test storage... 00:06:04.734 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:04.734 09:34:32 blockdev_nvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:04.734 09:34:32 blockdev_nvme -- common/autotest_common.sh@1691 -- # lcov --version 00:06:04.734 09:34:32 blockdev_nvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:04.991 09:34:32 blockdev_nvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:04.991 09:34:32 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:04.991 09:34:32 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:04.991 09:34:32 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:04.991 09:34:32 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.991 09:34:32 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:04.991 09:34:32 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:04.992 09:34:32 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:04.992 09:34:32 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:04.992 09:34:32 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:04.992 09:34:32 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:04.992 09:34:32 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:04.992 09:34:32 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:04.992 09:34:32 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:06:04.992 09:34:32 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:04.992 09:34:32 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:04.992 09:34:32 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:06:04.992 09:34:32 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:06:04.992 09:34:32 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:04.992 09:34:32 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:06:04.992 09:34:32 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:04.992 09:34:32 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:06:04.992 09:34:32 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:06:04.992 09:34:32 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:04.992 09:34:32 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:06:04.992 09:34:32 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:04.992 09:34:32 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:04.992 09:34:32 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:04.992 09:34:32 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:06:04.992 09:34:32 blockdev_nvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:04.992 09:34:32 blockdev_nvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:04.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.992 --rc genhtml_branch_coverage=1 00:06:04.992 --rc genhtml_function_coverage=1 00:06:04.992 --rc genhtml_legend=1 00:06:04.992 --rc geninfo_all_blocks=1 00:06:04.992 --rc geninfo_unexecuted_blocks=1 00:06:04.992 00:06:04.992 ' 00:06:04.992 09:34:32 blockdev_nvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:04.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.992 --rc genhtml_branch_coverage=1 00:06:04.992 --rc genhtml_function_coverage=1 00:06:04.992 --rc genhtml_legend=1 00:06:04.992 --rc geninfo_all_blocks=1 00:06:04.992 --rc geninfo_unexecuted_blocks=1 00:06:04.992 00:06:04.992 ' 00:06:04.992 09:34:32 blockdev_nvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:04.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.992 --rc genhtml_branch_coverage=1 00:06:04.992 --rc genhtml_function_coverage=1 00:06:04.992 --rc genhtml_legend=1 00:06:04.992 --rc geninfo_all_blocks=1 00:06:04.992 --rc geninfo_unexecuted_blocks=1 00:06:04.992 00:06:04.992 ' 00:06:04.992 09:34:32 blockdev_nvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:04.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.992 --rc genhtml_branch_coverage=1 00:06:04.992 --rc genhtml_function_coverage=1 00:06:04.992 --rc genhtml_legend=1 00:06:04.992 --rc geninfo_all_blocks=1 00:06:04.992 --rc geninfo_unexecuted_blocks=1 00:06:04.992 00:06:04.992 ' 00:06:04.992 09:34:32 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:04.992 09:34:32 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:04.992 09:34:32 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:04.992 09:34:32 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:04.992 09:34:32 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:04.992 09:34:32 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:04.992 09:34:32 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:06:04.992 09:34:32 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:04.992 09:34:32 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:04.992 09:34:32 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:04.992 09:34:32 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:04.992 09:34:32 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:04.992 09:34:32 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:06:04.992 09:34:32 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:04.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.992 09:34:32 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:04.992 09:34:32 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:06:04.992 09:34:32 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:04.992 09:34:32 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:06:04.992 09:34:32 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:04.992 09:34:32 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:04.992 09:34:32 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:04.992 09:34:32 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:06:04.992 09:34:32 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:06:04.992 09:34:32 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:04.992 09:34:32 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=59888 00:06:04.992 09:34:32 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:04.992 09:34:32 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 59888 00:06:04.992 09:34:32 blockdev_nvme -- common/autotest_common.sh@833 -- # '[' -z 59888 ']' 00:06:04.992 09:34:32 blockdev_nvme -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.992 09:34:32 blockdev_nvme -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:04.992 09:34:32 blockdev_nvme -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.992 09:34:32 blockdev_nvme -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:04.992 09:34:32 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:04.992 09:34:32 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:04.992 [2024-11-07 09:34:32.543257] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
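Startup for blockdev_nvme feeds the target a generated bdev config: scripts/gen_nvme.sh emits one bdev_nvme_attach_controller call per local PCIe controller, and the result is loaded below via load_subsystem_config. The shape of a single entry, as it appears in the log:

  { "method": "bdev_nvme_attach_controller",
    "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" } }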
00:06:04.992 [2024-11-07 09:34:32.543389] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59888 ] 00:06:05.249 [2024-11-07 09:34:32.701027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.249 [2024-11-07 09:34:32.799445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.815 09:34:33 blockdev_nvme -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:05.815 09:34:33 blockdev_nvme -- common/autotest_common.sh@866 -- # return 0 00:06:05.815 09:34:33 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:05.815 09:34:33 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:06:05.815 09:34:33 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:05.815 09:34:33 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:05.815 09:34:33 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:05.815 09:34:33 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:05.815 09:34:33 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.815 09:34:33 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:06.073 09:34:33 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.073 09:34:33 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:06.073 09:34:33 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.073 09:34:33 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:06.073 09:34:33 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.073 09:34:33 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:06:06.073 09:34:33 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:06.073 09:34:33 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.073 09:34:33 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:06.073 09:34:33 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.073 09:34:33 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:06.073 09:34:33 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.073 09:34:33 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:06.332 09:34:33 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.332 09:34:33 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:06.332 09:34:33 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.332 09:34:33 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:06.332 09:34:33 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.332 09:34:33 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:06.332 09:34:33 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:06.332 09:34:33 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.332 09:34:33 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:06.332 09:34:33 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:06.332 09:34:33 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.332 09:34:33 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:06.332 09:34:33 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:06.333 09:34:33 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "5f216fb6-131b-44e5-bdff-6f2b3a73a1e7"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "5f216fb6-131b-44e5-bdff-6f2b3a73a1e7",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "8c8afd5b-dc30-44f2-b83f-fa804e60fc14"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "8c8afd5b-dc30-44f2-b83f-fa804e60fc14",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "4b08d5cc-f08e-4459-a7a8-f12a9fb8c489"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4b08d5cc-f08e-4459-a7a8-f12a9fb8c489",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "6aca8c5c-4f17-462f-aab7-60d4ac4e6f63"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6aca8c5c-4f17-462f-aab7-60d4ac4e6f63",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "c3626d9b-b730-4fbe-ad47-666719fc657c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "c3626d9b-b730-4fbe-ad47-666719fc657c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "64c91d1f-0001-49ff-ba47-4f9d35cb8b49"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "64c91d1f-0001-49ff-ba47-4f9d35cb8b49",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:06.333 09:34:33 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:06.333 09:34:33 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:06.333 09:34:33 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:06.333 09:34:33 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 59888 00:06:06.333 09:34:33 blockdev_nvme -- common/autotest_common.sh@952 -- # '[' -z 59888 ']' 00:06:06.333 09:34:33 blockdev_nvme -- common/autotest_common.sh@956 -- # kill -0 59888 00:06:06.333 09:34:33 blockdev_nvme -- common/autotest_common.sh@957 -- # uname 00:06:06.333 09:34:33 
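The enumeration above is what fills bdevs_name: bdev_get_bdevs returns the JSON dumped here, and jq keeps only unclaimed bdevs. A standalone equivalent against the same target:

    # collect the names of all unclaimed bdevs, one per line
    mapfile -t bdevs_name < <(./scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[] | select(.claimed == false) | .name')
    printf '%s\n' "${bdevs_name[@]}"   # Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1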
blockdev_nvme -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:06.333 09:34:33 blockdev_nvme -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59888 00:06:06.333 09:34:33 blockdev_nvme -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:06.333 killing process with pid 59888 00:06:06.333 09:34:33 blockdev_nvme -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:06.333 09:34:33 blockdev_nvme -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59888' 00:06:06.333 09:34:33 blockdev_nvme -- common/autotest_common.sh@971 -- # kill 59888 00:06:06.333 09:34:33 blockdev_nvme -- common/autotest_common.sh@976 -- # wait 59888 00:06:07.736 09:34:35 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:07.736 09:34:35 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:07.736 09:34:35 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:06:07.736 09:34:35 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:07.736 09:34:35 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:07.736 ************************************ 00:06:07.736 START TEST bdev_hello_world 00:06:07.736 ************************************ 00:06:07.736 09:34:35 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:07.995 [2024-11-07 09:34:35.426931] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:06:07.995 [2024-11-07 09:34:35.427056] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59972 ] 00:06:07.995 [2024-11-07 09:34:35.589148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.253 [2024-11-07 09:34:35.691555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.820 [2024-11-07 09:34:36.231484] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:08.820 [2024-11-07 09:34:36.231537] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:08.820 [2024-11-07 09:34:36.231559] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:08.820 [2024-11-07 09:34:36.234112] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:08.820 [2024-11-07 09:34:36.234740] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:08.820 [2024-11-07 09:34:36.234769] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:08.820 [2024-11-07 09:34:36.234915] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
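The bdev_hello_world pass above reduces to a single run of the packaged example; a sketch against the same config file:

    # open Nvme0n1 through the bdev layer, write a buffer, read it back
    build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1
    # expected NOTICE sequence: open bdev -> open io channel -> write -> read back "Hello World!"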
00:06:08.820 00:06:08.820 [2024-11-07 09:34:36.234935] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:09.386 00:06:09.386 real 0m1.578s 00:06:09.387 user 0m1.291s 00:06:09.387 sys 0m0.181s 00:06:09.387 ************************************ 00:06:09.387 END TEST bdev_hello_world 00:06:09.387 ************************************ 00:06:09.387 09:34:36 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:09.387 09:34:36 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:09.387 09:34:36 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:09.387 09:34:36 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:09.387 09:34:36 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:09.387 09:34:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:09.387 ************************************ 00:06:09.387 START TEST bdev_bounds 00:06:09.387 ************************************ 00:06:09.387 09:34:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:06:09.387 09:34:36 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=60008 00:06:09.387 09:34:36 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:09.387 09:34:36 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 60008' 00:06:09.387 Process bdevio pid: 60008 00:06:09.387 09:34:36 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 60008 00:06:09.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.387 09:34:36 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:09.387 09:34:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 60008 ']' 00:06:09.387 09:34:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.387 09:34:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:09.387 09:34:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.387 09:34:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:09.387 09:34:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:09.387 [2024-11-07 09:34:37.042798] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:06:09.387 [2024-11-07 09:34:37.042920] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60008 ] 00:06:09.645 [2024-11-07 09:34:37.199079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:09.645 [2024-11-07 09:34:37.301194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.645 [2024-11-07 09:34:37.301351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.645 [2024-11-07 09:34:37.301369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:10.581 09:34:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:10.581 09:34:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:06:10.581 09:34:37 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:10.581 I/O targets: 00:06:10.581 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:10.581 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:06:10.581 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:10.581 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:10.581 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:10.581 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:10.581 00:06:10.581 00:06:10.581 CUnit - A unit testing framework for C - Version 2.1-3 00:06:10.581 http://cunit.sourceforge.net/ 00:06:10.581 00:06:10.581 00:06:10.581 Suite: bdevio tests on: Nvme3n1 00:06:10.581 Test: blockdev write read block ...passed 00:06:10.581 Test: blockdev write zeroes read block ...passed 00:06:10.581 Test: blockdev write zeroes read no split ...passed 00:06:10.581 Test: blockdev write zeroes read split ...passed 00:06:10.581 Test: blockdev write zeroes read split partial ...passed 00:06:10.581 Test: blockdev reset ...[2024-11-07 09:34:38.037427] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:10.581 passed 00:06:10.581 Test: blockdev write read 8 blocks ...[2024-11-07 09:34:38.040303] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
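bdev_bounds runs the bdevio app in wait mode and then drives it over RPC; condensed, the two halves traced above are:

    # half 1: start bdevio against the shared config and leave it waiting for commands (-w)
    test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
    # half 2: once it is listening, fire the full CUnit suite for every attached bdev
    test/bdev/bdevio/tests.py perform_tests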
00:06:10.581 passed 00:06:10.581 Test: blockdev write read size > 128k ...passed 00:06:10.581 Test: blockdev write read invalid size ...passed 00:06:10.581 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:10.581 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:10.581 Test: blockdev write read max offset ...passed 00:06:10.581 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:10.581 Test: blockdev writev readv 8 blocks ...passed 00:06:10.581 Test: blockdev writev readv 30 x 1block ...passed 00:06:10.581 Test: blockdev writev readv block ...passed 00:06:10.581 Test: blockdev writev readv size > 128k ...passed 00:06:10.581 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:10.581 Test: blockdev comparev and writev ...[2024-11-07 09:34:38.048096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2aa00a000 len:0x1000 00:06:10.581 [2024-11-07 09:34:38.048144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:10.581 passed 00:06:10.581 Test: blockdev nvme passthru rw ...passed 00:06:10.581 Test: blockdev nvme passthru vendor specific ...passed 00:06:10.581 Test: blockdev nvme admin passthru ...[2024-11-07 09:34:38.048940] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:10.581 [2024-11-07 09:34:38.048974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:10.581 passed 00:06:10.581 Test: blockdev copy ...passed 00:06:10.581 Suite: bdevio tests on: Nvme2n3 00:06:10.581 Test: blockdev write read block ...passed 00:06:10.581 Test: blockdev write zeroes read block ...passed 00:06:10.581 Test: blockdev write zeroes read no split ...passed 00:06:10.581 Test: blockdev write zeroes read split ...passed 00:06:10.581 Test: blockdev write zeroes read split partial ...passed 00:06:10.581 Test: blockdev reset ...[2024-11-07 09:34:38.107423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:10.581 [2024-11-07 09:34:38.110799] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
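The COMPARE FAILURE (02/85) completions scattered through these suites are expected output rather than errors: the comparev-and-writev test appears to exercise the miscompare path deliberately, and each suite still reports passed. Whether a given bdev advertises compare support at all is visible in the JSON dumped earlier, e.g.:

    ./scripts/rpc.py bdev_get_bdevs \
        | jq '.[] | select(.name == "Nvme2n3") | .supported_io_types.compare'   # -> true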
00:06:10.581 passed 00:06:10.581 Test: blockdev write read 8 blocks ...passed 00:06:10.581 Test: blockdev write read size > 128k ...passed 00:06:10.581 Test: blockdev write read invalid size ...passed 00:06:10.581 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:10.581 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:10.581 Test: blockdev write read max offset ...passed 00:06:10.581 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:10.581 Test: blockdev writev readv 8 blocks ...passed 00:06:10.581 Test: blockdev writev readv 30 x 1block ...passed 00:06:10.581 Test: blockdev writev readv block ...passed 00:06:10.581 Test: blockdev writev readv size > 128k ...passed 00:06:10.581 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:10.581 Test: blockdev comparev and writev ...[2024-11-07 09:34:38.118676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ae406000 len:0x1000 00:06:10.581 [2024-11-07 09:34:38.118822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:10.581 passed 00:06:10.581 Test: blockdev nvme passthru rw ...passed 00:06:10.581 Test: blockdev nvme passthru vendor specific ...[2024-11-07 09:34:38.119845] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:10.581 [2024-11-07 09:34:38.119948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:10.581 passed 00:06:10.581 Test: blockdev nvme admin passthru ...passed 00:06:10.581 Test: blockdev copy ...passed 00:06:10.581 Suite: bdevio tests on: Nvme2n2 00:06:10.581 Test: blockdev write read block ...passed 00:06:10.581 Test: blockdev write zeroes read block ...passed 00:06:10.581 Test: blockdev write zeroes read no split ...passed 00:06:10.581 Test: blockdev write zeroes read split ...passed 00:06:10.581 Test: blockdev write zeroes read split partial ...passed 00:06:10.581 Test: blockdev reset ...[2024-11-07 09:34:38.174971] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:10.581 [2024-11-07 09:34:38.178238] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:10.581 passed 00:06:10.581 Test: blockdev write read 8 blocks ...passed 00:06:10.581 Test: blockdev write read size > 128k ...passed 00:06:10.581 Test: blockdev write read invalid size ...passed 00:06:10.581 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:10.581 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:10.581 Test: blockdev write read max offset ...passed 00:06:10.581 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:10.581 Test: blockdev writev readv 8 blocks ...passed 00:06:10.581 Test: blockdev writev readv 30 x 1block ...passed 00:06:10.581 Test: blockdev writev readv block ...passed 00:06:10.581 Test: blockdev writev readv size > 128k ...passed 00:06:10.582 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:10.582 Test: blockdev comparev and writev ...[2024-11-07 09:34:38.188322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bfc3c000 len:0x1000 00:06:10.582 [2024-11-07 09:34:38.188362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:10.582 passed 00:06:10.582 Test: blockdev nvme passthru rw ...passed 00:06:10.582 Test: blockdev nvme passthru vendor specific ...[2024-11-07 09:34:38.189028] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:10.582 [2024-11-07 09:34:38.189128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:10.582 passed 00:06:10.582 Test: blockdev nvme admin passthru ...passed 00:06:10.582 Test: blockdev copy ...passed 00:06:10.582 Suite: bdevio tests on: Nvme2n1 00:06:10.582 Test: blockdev write read block ...passed 00:06:10.582 Test: blockdev write zeroes read block ...passed 00:06:10.582 Test: blockdev write zeroes read no split ...passed 00:06:10.582 Test: blockdev write zeroes read split ...passed 00:06:10.582 Test: blockdev write zeroes read split partial ...passed 00:06:10.582 Test: blockdev reset ...[2024-11-07 09:34:38.241932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:10.582 [2024-11-07 09:34:38.244878] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:06:10.582 passed 00:06:10.582 Test: blockdev write read 8 blocks ...passed 00:06:10.582 Test: blockdev write read size > 128k ...passed 00:06:10.582 Test: blockdev write read invalid size ...passed 00:06:10.582 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:10.582 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:10.582 Test: blockdev write read max offset ...passed 00:06:10.582 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:10.582 Test: blockdev writev readv 8 blocks ...passed 00:06:10.841 Test: blockdev writev readv 30 x 1block ...passed 00:06:10.841 Test: blockdev writev readv block ...passed 00:06:10.841 Test: blockdev writev readv size > 128k ...passed 00:06:10.841 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:10.841 Test: blockdev comparev and writev ...[2024-11-07 09:34:38.252955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bfc38000 len:0x1000 00:06:10.841 [2024-11-07 09:34:38.252999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:10.841 passed 00:06:10.841 Test: blockdev nvme passthru rw ...passed 00:06:10.841 Test: blockdev nvme passthru vendor specific ...[2024-11-07 09:34:38.253596] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:10.841 [2024-11-07 09:34:38.253625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:10.841 passed 00:06:10.841 Test: blockdev nvme admin passthru ...passed 00:06:10.841 Test: blockdev copy ...passed 00:06:10.841 Suite: bdevio tests on: Nvme1n1 00:06:10.841 Test: blockdev write read block ...passed 00:06:10.841 Test: blockdev write zeroes read block ...passed 00:06:10.841 Test: blockdev write zeroes read no split ...passed 00:06:10.841 Test: blockdev write zeroes read split ...passed 00:06:10.841 Test: blockdev write zeroes read split partial ...passed 00:06:10.841 Test: blockdev reset ...[2024-11-07 09:34:38.310346] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:10.841 [2024-11-07 09:34:38.312976] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. passed
00:06:10.841 00:06:10.841 Test: blockdev write read 8 blocks ...passed 00:06:10.841 Test: blockdev write read size > 128k ...passed 00:06:10.841 Test: blockdev write read invalid size ...passed 00:06:10.841 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:10.841 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:10.841 Test: blockdev write read max offset ...passed 00:06:10.841 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:10.841 Test: blockdev writev readv 8 blocks ...passed 00:06:10.841 Test: blockdev writev readv 30 x 1block ...passed 00:06:10.841 Test: blockdev writev readv block ...passed 00:06:10.841 Test: blockdev writev readv size > 128k ...passed 00:06:10.841 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:10.841 Test: blockdev comparev and writev ...[2024-11-07 09:34:38.320375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bfc34000 len:0x1000 00:06:10.841 [2024-11-07 09:34:38.320500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:10.841 passed 00:06:10.841 Test: blockdev nvme passthru rw ...passed 00:06:10.841 Test: blockdev nvme passthru vendor specific ...passed 00:06:10.841 Test: blockdev nvme admin passthru ...[2024-11-07 09:34:38.321069] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:10.841 [2024-11-07 09:34:38.321101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:10.841 passed 00:06:10.841 Test: blockdev copy ...passed 00:06:10.841 Suite: bdevio tests on: Nvme0n1 00:06:10.841 Test: blockdev write read block ...passed 00:06:10.841 Test: blockdev write zeroes read block ...passed 00:06:10.841 Test: blockdev write zeroes read no split ...passed 00:06:10.841 Test: blockdev write zeroes read split ...passed 00:06:10.841 Test: blockdev write zeroes read split partial ...passed 00:06:10.841 Test: blockdev reset ...[2024-11-07 09:34:38.379804] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:10.841 [2024-11-07 09:34:38.382585] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful.
00:06:10.841 passed 00:06:10.841 Test: blockdev write read 8 blocks ...passed 00:06:10.841 Test: blockdev write read size > 128k ...passed 00:06:10.841 Test: blockdev write read invalid size ...passed 00:06:10.841 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:10.841 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:10.841 Test: blockdev write read max offset ...passed 00:06:10.841 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:10.841 Test: blockdev writev readv 8 blocks ...passed 00:06:10.841 Test: blockdev writev readv 30 x 1block ...passed 00:06:10.841 Test: blockdev writev readv block ...passed 00:06:10.841 Test: blockdev writev readv size > 128k ...passed 00:06:10.841 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:10.841 Test: blockdev comparev and writev ...passed 00:06:10.841 Test: blockdev nvme passthru rw ...[2024-11-07 09:34:38.390458] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:10.841 separate metadata which is not supported yet. 00:06:10.841 passed 00:06:10.841 Test: blockdev nvme passthru vendor specific ...[2024-11-07 09:34:38.391115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:10.841 [2024-11-07 09:34:38.391262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:10.841 passed 00:06:10.841 Test: blockdev nvme admin passthru ...passed 00:06:10.841 Test: blockdev copy ...passed 00:06:10.841 00:06:10.841 Run Summary: Type Total Ran Passed Failed Inactive 00:06:10.842 suites 6 6 n/a 0 0 00:06:10.842 tests 138 138 138 0 0 00:06:10.842 asserts 893 893 893 0 n/a 00:06:10.842 00:06:10.842 Elapsed time = 1.064 seconds 00:06:10.842 0 00:06:10.842 09:34:38 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 60008 00:06:10.842 09:34:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 60008 ']' 00:06:10.842 09:34:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 60008 00:06:10.842 09:34:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:06:10.842 09:34:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:10.842 09:34:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60008 00:06:10.842 killing process with pid 60008 00:06:10.842 09:34:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:10.842 09:34:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:10.842 09:34:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60008' 00:06:10.842 09:34:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@971 -- # kill 60008 00:06:10.842 09:34:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@976 -- # wait 60008 00:06:11.778 ************************************ 00:06:11.778 END TEST bdev_bounds 00:06:11.778 ************************************ 00:06:11.778 09:34:39 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:11.778 00:06:11.778 real 0m2.113s 00:06:11.778 user 0m5.360s 00:06:11.778 sys 0m0.292s 00:06:11.778 09:34:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:11.778 09:34:39 
blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:11.778 09:34:39 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:11.778 09:34:39 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:06:11.778 09:34:39 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:11.778 09:34:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:11.778 ************************************ 00:06:11.778 START TEST bdev_nbd 00:06:11.778 ************************************ 00:06:11.778 09:34:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:11.778 09:34:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:11.778 09:34:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:11.778 09:34:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.778 09:34:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:11.778 09:34:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:11.778 09:34:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:11.778 09:34:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:06:11.778 09:34:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:11.778 09:34:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:11.778 09:34:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:11.778 09:34:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:06:11.778 09:34:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:11.778 09:34:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:11.778 09:34:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:11.778 09:34:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:11.778 09:34:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=60062 00:06:11.778 09:34:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:11.778 09:34:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 60062 /var/tmp/spdk-nbd.sock 00:06:11.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:11.778 09:34:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 60062 ']' 00:06:11.778 09:34:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:11.778 09:34:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:11.778 09:34:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:11.778 09:34:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:11.778 09:34:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:11.778 09:34:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:11.778 [2024-11-07 09:34:39.208927] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:06:11.778 [2024-11-07 09:34:39.209039] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:11.778 [2024-11-07 09:34:39.369813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.037 [2024-11-07 09:34:39.469006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.602 09:34:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:12.602 09:34:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:06:12.602 09:34:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:12.602 09:34:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.602 09:34:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:12.602 09:34:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:12.602 09:34:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:12.602 09:34:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.602 09:34:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:12.602 09:34:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:12.602 09:34:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:12.602 09:34:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:12.602 09:34:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:12.602 09:34:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:12.602 09:34:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:12.602 09:34:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:12.860 09:34:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:12.860 09:34:40 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:12.860 09:34:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:12.860 09:34:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:12.860 09:34:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:12.860 09:34:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:12.860 09:34:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:12.860 09:34:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:12.860 09:34:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:12.861 09:34:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:12.861 09:34:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:12.861 1+0 records in 00:06:12.861 1+0 records out 00:06:12.861 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000355597 s, 11.5 MB/s 00:06:12.861 09:34:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:12.861 09:34:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:12.861 09:34:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:12.861 09:34:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:12.861 09:34:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:12.861 09:34:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:12.861 09:34:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:12.861 09:34:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:06:12.861 09:34:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:12.861 09:34:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:12.861 09:34:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:12.861 09:34:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:12.861 09:34:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:12.861 09:34:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:12.861 09:34:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:12.861 09:34:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:12.861 09:34:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:12.861 09:34:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:12.861 09:34:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:12.861 09:34:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:12.861 1+0 records in 00:06:12.861 1+0 records out 00:06:12.861 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000511182 s, 8.0 MB/s 00:06:12.861 09:34:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
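Each waitfornbd pass traced here is the same three-step check; standalone, with an illustrative scratch path in place of the repo's nbdtest file:

    grep -q -w nbd0 /proc/partitions                              # kernel registered the device?
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct  # one direct 4 KiB read
    stat -c %s /tmp/nbdtest                                       # expect 4096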
00:06:12.861 09:34:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:12.861 09:34:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:12.861 09:34:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:12.861 09:34:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:12.861 09:34:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:12.861 09:34:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:12.861 09:34:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:13.119 09:34:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:13.119 09:34:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:13.119 09:34:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:13.119 09:34:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd2 00:06:13.119 09:34:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:13.119 09:34:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:13.119 09:34:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:13.119 09:34:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd2 /proc/partitions 00:06:13.119 09:34:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:13.119 09:34:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:13.119 09:34:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:13.119 09:34:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:13.119 1+0 records in 00:06:13.119 1+0 records out 00:06:13.119 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000441176 s, 9.3 MB/s 00:06:13.119 09:34:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:13.119 09:34:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:13.119 09:34:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:13.119 09:34:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:13.119 09:34:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:13.119 09:34:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:13.119 09:34:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:13.119 09:34:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:13.378 09:34:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:13.378 09:34:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:13.378 09:34:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:13.378 09:34:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd3 00:06:13.378 09:34:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:13.378 09:34:40 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:13.378 09:34:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:13.378 09:34:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd3 /proc/partitions 00:06:13.378 09:34:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:13.378 09:34:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:13.378 09:34:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:13.378 09:34:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:13.378 1+0 records in 00:06:13.378 1+0 records out 00:06:13.378 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000391505 s, 10.5 MB/s 00:06:13.378 09:34:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:13.378 09:34:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:13.378 09:34:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:13.378 09:34:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:13.378 09:34:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:13.378 09:34:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:13.378 09:34:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:13.378 09:34:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:13.637 09:34:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:13.637 09:34:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:13.637 09:34:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:13.637 09:34:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd4 00:06:13.637 09:34:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:13.637 09:34:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:13.637 09:34:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:13.637 09:34:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd4 /proc/partitions 00:06:13.637 09:34:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:13.637 09:34:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:13.637 09:34:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:13.637 09:34:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:13.637 1+0 records in 00:06:13.637 1+0 records out 00:06:13.637 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000624077 s, 6.6 MB/s 00:06:13.637 09:34:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:13.637 09:34:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:13.637 09:34:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:13.637 09:34:41 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:13.637 09:34:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:13.637 09:34:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:13.637 09:34:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:13.637 09:34:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:13.895 09:34:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:13.895 09:34:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:13.895 09:34:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:13.895 09:34:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd5 00:06:13.895 09:34:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:13.895 09:34:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:13.895 09:34:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:13.895 09:34:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd5 /proc/partitions 00:06:13.895 09:34:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:13.895 09:34:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:13.895 09:34:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:13.895 09:34:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:13.895 1+0 records in 00:06:13.895 1+0 records out 00:06:13.895 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000635583 s, 6.4 MB/s 00:06:13.896 09:34:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:13.896 09:34:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:13.896 09:34:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:13.896 09:34:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:13.896 09:34:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:13.896 09:34:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:13.896 09:34:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:13.896 09:34:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:14.154 09:34:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:14.154 { 00:06:14.154 "nbd_device": "/dev/nbd0", 00:06:14.154 "bdev_name": "Nvme0n1" 00:06:14.154 }, 00:06:14.154 { 00:06:14.154 "nbd_device": "/dev/nbd1", 00:06:14.154 "bdev_name": "Nvme1n1" 00:06:14.154 }, 00:06:14.154 { 00:06:14.154 "nbd_device": "/dev/nbd2", 00:06:14.154 "bdev_name": "Nvme2n1" 00:06:14.154 }, 00:06:14.154 { 00:06:14.154 "nbd_device": "/dev/nbd3", 00:06:14.154 "bdev_name": "Nvme2n2" 00:06:14.154 }, 00:06:14.154 { 00:06:14.154 "nbd_device": "/dev/nbd4", 00:06:14.154 "bdev_name": "Nvme2n3" 00:06:14.154 }, 00:06:14.154 { 00:06:14.154 "nbd_device": "/dev/nbd5", 00:06:14.154 "bdev_name": "Nvme3n1" 00:06:14.154 } 00:06:14.154 ]' 00:06:14.154 
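The export/inspect/teardown cycle above is three RPCs against the dedicated spdk-nbd.sock; one round trip looks like:

    sock=/var/tmp/spdk-nbd.sock
    ./scripts/rpc.py -s "$sock" nbd_start_disk Nvme0n1 /dev/nbd0      # export a bdev as an nbd device
    ./scripts/rpc.py -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device'
    ./scripts/rpc.py -s "$sock" nbd_stop_disk /dev/nbd0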
09:34:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:14.154 09:34:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:14.154 { 00:06:14.154 "nbd_device": "/dev/nbd0", 00:06:14.154 "bdev_name": "Nvme0n1" 00:06:14.154 }, 00:06:14.154 { 00:06:14.154 "nbd_device": "/dev/nbd1", 00:06:14.154 "bdev_name": "Nvme1n1" 00:06:14.154 }, 00:06:14.154 { 00:06:14.154 "nbd_device": "/dev/nbd2", 00:06:14.154 "bdev_name": "Nvme2n1" 00:06:14.154 }, 00:06:14.154 { 00:06:14.154 "nbd_device": "/dev/nbd3", 00:06:14.154 "bdev_name": "Nvme2n2" 00:06:14.154 }, 00:06:14.154 { 00:06:14.154 "nbd_device": "/dev/nbd4", 00:06:14.154 "bdev_name": "Nvme2n3" 00:06:14.154 }, 00:06:14.154 { 00:06:14.154 "nbd_device": "/dev/nbd5", 00:06:14.154 "bdev_name": "Nvme3n1" 00:06:14.154 } 00:06:14.154 ]' 00:06:14.154 09:34:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:14.154 09:34:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:06:14.154 09:34:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.154 09:34:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:06:14.154 09:34:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:14.154 09:34:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:14.154 09:34:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.154 09:34:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:14.421 09:34:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:14.421 09:34:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:14.421 09:34:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:14.421 09:34:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.421 09:34:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.421 09:34:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:14.421 09:34:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:14.421 09:34:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.421 09:34:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.421 09:34:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:14.714 09:34:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:14.714 09:34:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:14.714 09:34:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:14.714 09:34:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.714 09:34:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.714 09:34:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:14.714 09:34:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:14.714 09:34:42 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.714 09:34:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.714 09:34:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:14.714 09:34:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:14.714 09:34:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:14.714 09:34:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:14.714 09:34:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.714 09:34:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.714 09:34:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:14.714 09:34:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:14.714 09:34:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.714 09:34:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.714 09:34:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:14.973 09:34:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:14.973 09:34:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:14.973 09:34:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:14.973 09:34:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.973 09:34:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.973 09:34:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:14.973 09:34:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:14.973 09:34:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.973 09:34:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.973 09:34:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:15.232 09:34:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:15.232 09:34:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:15.232 09:34:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:15.232 09:34:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:15.232 09:34:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:15.232 09:34:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:15.232 09:34:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:15.232 09:34:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:15.232 09:34:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:15.232 09:34:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:15.490 09:34:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:15.490 09:34:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:15.490 
09:34:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:15.490 09:34:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:15.490 09:34:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:15.490 09:34:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:15.490 09:34:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:15.490 09:34:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:15.490 09:34:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:15.490 09:34:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.490 09:34:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:15.748 09:34:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:15.748 09:34:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:15.748 09:34:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:15.748 09:34:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:15.748 09:34:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:15.748 09:34:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:15.748 09:34:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:15.748 09:34:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:15.748 09:34:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:15.748 09:34:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:15.748 09:34:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:15.748 09:34:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:15.748 09:34:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:15.748 09:34:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.748 09:34:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:15.748 09:34:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:15.748 09:34:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:15.748 09:34:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:15.748 09:34:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:15.748 09:34:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.748 09:34:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:15.748 09:34:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:15.748 09:34:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 
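For reference, the teardown traced above repeats one pattern per device: nbd_stop_disk over the RPC socket, a bounded poll of /proc/partitions until the kernel unregisters the node, then an nbd_get_disks call whose JSON must come back empty. A minimal sketch of that pattern, assuming the 20-iteration bound seen in the trace and a 0.1 s sleep between polls (the sleep never fires above because each device is already gone on the first check):

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            if grep -q -w "$nbd_name" /proc/partitions; then
                sleep 0.1    # still registered; poll again (interval is an assumption)
            else
                break        # gone from /proc/partitions
            fi
        done
        return 0
    }

    for dev in /dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5; do
        rpc nbd_stop_disk "$dev"
        waitfornbd_exit "$(basename "$dev")"
    done

    # After the loop, nbd_get_disks must report an empty list; '|| true'
    # keeps grep's non-zero exit (no matches) from aborting under 'set -e'.
    count=$(rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    (( count == 0 ))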
00:06:15.748 09:34:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:15.748 09:34:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:15.748 09:34:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:15.748 09:34:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:15.748 09:34:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:15.748 /dev/nbd0 00:06:16.007 09:34:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:16.007 09:34:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:16.007 09:34:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:16.007 09:34:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:16.007 09:34:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:16.007 09:34:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:16.007 09:34:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:16.007 09:34:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:16.007 09:34:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:16.007 09:34:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:16.007 09:34:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:16.007 1+0 records in 00:06:16.007 1+0 records out 00:06:16.007 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000412434 s, 9.9 MB/s 00:06:16.007 09:34:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:16.007 09:34:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:16.007 09:34:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:16.007 09:34:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:16.007 09:34:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:16.007 09:34:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:16.007 09:34:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:16.007 09:34:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:06:16.007 /dev/nbd1 00:06:16.007 09:34:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:16.007 09:34:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:16.007 09:34:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:16.007 09:34:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:16.007 09:34:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:16.007 09:34:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:16.007 09:34:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:16.007 09:34:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:16.007 
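Starting a disk is symmetric: nbd_start_disk maps a bdev onto /dev/nbdX, and the waitfornbd helper traced above refuses to proceed until the node both appears in /proc/partitions and services a real read. The probe is a single 4 KiB O_DIRECT read whose on-disk copy must be non-empty. A condensed sketch (retry bound, block size, and test-file path follow the trace; the sleep between retries is assumed):

    waitfornbd() {
        local nbd_name=$1 i size
        local testfile=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
        # Phase 1: wait for the kernel to register the device node.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # Phase 2: one direct read must succeed and land as a 4096-byte file.
        for ((i = 1; i <= 20; i++)); do
            dd if=/dev/$nbd_name of="$testfile" bs=4096 count=1 iflag=direct && break
            sleep 0.1
        done
        size=$(stat -c %s "$testfile")
        rm -f "$testfile"
        [ "$size" != 0 ]    # non-empty copy means the device is live
    }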
09:34:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:16.007 09:34:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:16.007 09:34:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:16.008 1+0 records in 00:06:16.008 1+0 records out 00:06:16.008 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000535255 s, 7.7 MB/s 00:06:16.008 09:34:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:16.266 09:34:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:16.266 09:34:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:16.266 09:34:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:16.266 09:34:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:16.266 09:34:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:16.266 09:34:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:16.266 09:34:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:06:16.266 /dev/nbd10 00:06:16.266 09:34:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:16.266 09:34:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:16.266 09:34:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd10 00:06:16.266 09:34:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:16.266 09:34:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:16.266 09:34:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:16.266 09:34:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd10 /proc/partitions 00:06:16.266 09:34:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:16.266 09:34:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:16.266 09:34:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:16.266 09:34:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:16.266 1+0 records in 00:06:16.266 1+0 records out 00:06:16.266 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000561737 s, 7.3 MB/s 00:06:16.266 09:34:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:16.266 09:34:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:16.266 09:34:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:16.266 09:34:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:16.266 09:34:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:16.266 09:34:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:16.266 09:34:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:16.266 09:34:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:06:16.525 /dev/nbd11 00:06:16.525 09:34:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:16.525 09:34:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:16.525 09:34:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd11 00:06:16.525 09:34:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:16.525 09:34:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:16.525 09:34:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:16.525 09:34:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd11 /proc/partitions 00:06:16.525 09:34:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:16.525 09:34:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:16.525 09:34:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:16.525 09:34:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:16.525 1+0 records in 00:06:16.525 1+0 records out 00:06:16.525 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000505113 s, 8.1 MB/s 00:06:16.525 09:34:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:16.525 09:34:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:16.525 09:34:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:16.525 09:34:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:16.525 09:34:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:16.525 09:34:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:16.525 09:34:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:16.525 09:34:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:06:16.783 /dev/nbd12 00:06:16.783 09:34:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:16.783 09:34:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:16.783 09:34:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd12 00:06:16.783 09:34:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:16.783 09:34:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:16.783 09:34:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:16.783 09:34:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd12 /proc/partitions 00:06:16.783 09:34:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:16.783 09:34:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:16.783 09:34:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:16.784 09:34:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:16.784 1+0 records in 00:06:16.784 1+0 records 
out 00:06:16.784 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000620405 s, 6.6 MB/s 00:06:16.784 09:34:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:16.784 09:34:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:16.784 09:34:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:16.784 09:34:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:16.784 09:34:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:16.784 09:34:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:16.784 09:34:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:16.784 09:34:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:06:17.042 /dev/nbd13 00:06:17.042 09:34:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:17.042 09:34:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:17.042 09:34:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd13 00:06:17.042 09:34:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:17.042 09:34:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:17.042 09:34:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:17.042 09:34:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd13 /proc/partitions 00:06:17.042 09:34:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:17.042 09:34:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:17.042 09:34:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:17.043 09:34:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:17.043 1+0 records in 00:06:17.043 1+0 records out 00:06:17.043 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000496603 s, 8.2 MB/s 00:06:17.043 09:34:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:17.043 09:34:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:17.043 09:34:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:17.043 09:34:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:17.043 09:34:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:17.043 09:34:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:17.043 09:34:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:17.043 09:34:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:17.043 09:34:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.043 09:34:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:17.301 09:34:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:17.301 { 00:06:17.301 
"nbd_device": "/dev/nbd0", 00:06:17.301 "bdev_name": "Nvme0n1" 00:06:17.301 }, 00:06:17.301 { 00:06:17.301 "nbd_device": "/dev/nbd1", 00:06:17.301 "bdev_name": "Nvme1n1" 00:06:17.301 }, 00:06:17.301 { 00:06:17.301 "nbd_device": "/dev/nbd10", 00:06:17.301 "bdev_name": "Nvme2n1" 00:06:17.301 }, 00:06:17.301 { 00:06:17.301 "nbd_device": "/dev/nbd11", 00:06:17.301 "bdev_name": "Nvme2n2" 00:06:17.301 }, 00:06:17.301 { 00:06:17.301 "nbd_device": "/dev/nbd12", 00:06:17.301 "bdev_name": "Nvme2n3" 00:06:17.301 }, 00:06:17.301 { 00:06:17.301 "nbd_device": "/dev/nbd13", 00:06:17.301 "bdev_name": "Nvme3n1" 00:06:17.301 } 00:06:17.301 ]' 00:06:17.301 09:34:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:17.301 { 00:06:17.301 "nbd_device": "/dev/nbd0", 00:06:17.301 "bdev_name": "Nvme0n1" 00:06:17.301 }, 00:06:17.301 { 00:06:17.301 "nbd_device": "/dev/nbd1", 00:06:17.301 "bdev_name": "Nvme1n1" 00:06:17.301 }, 00:06:17.301 { 00:06:17.301 "nbd_device": "/dev/nbd10", 00:06:17.301 "bdev_name": "Nvme2n1" 00:06:17.301 }, 00:06:17.301 { 00:06:17.301 "nbd_device": "/dev/nbd11", 00:06:17.301 "bdev_name": "Nvme2n2" 00:06:17.301 }, 00:06:17.301 { 00:06:17.301 "nbd_device": "/dev/nbd12", 00:06:17.301 "bdev_name": "Nvme2n3" 00:06:17.301 }, 00:06:17.301 { 00:06:17.301 "nbd_device": "/dev/nbd13", 00:06:17.301 "bdev_name": "Nvme3n1" 00:06:17.301 } 00:06:17.301 ]' 00:06:17.301 09:34:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:17.301 09:34:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:17.301 /dev/nbd1 00:06:17.301 /dev/nbd10 00:06:17.301 /dev/nbd11 00:06:17.301 /dev/nbd12 00:06:17.301 /dev/nbd13' 00:06:17.301 09:34:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:17.301 /dev/nbd1 00:06:17.301 /dev/nbd10 00:06:17.301 /dev/nbd11 00:06:17.301 /dev/nbd12 00:06:17.301 /dev/nbd13' 00:06:17.301 09:34:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:17.301 09:34:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:06:17.301 09:34:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:06:17.301 09:34:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:06:17.301 09:34:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:06:17.301 09:34:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:06:17.301 09:34:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:17.301 09:34:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:17.301 09:34:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:17.301 09:34:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:17.301 09:34:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:17.301 09:34:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:17.301 256+0 records in 00:06:17.301 256+0 records out 00:06:17.301 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00718264 s, 146 MB/s 00:06:17.301 09:34:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:17.301 09:34:44 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:17.301 256+0 records in 00:06:17.301 256+0 records out 00:06:17.301 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0632861 s, 16.6 MB/s 00:06:17.301 09:34:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:17.301 09:34:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:17.559 256+0 records in 00:06:17.559 256+0 records out 00:06:17.559 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0635355 s, 16.5 MB/s 00:06:17.559 09:34:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:17.559 09:34:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:17.559 256+0 records in 00:06:17.559 256+0 records out 00:06:17.559 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.076515 s, 13.7 MB/s 00:06:17.559 09:34:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:17.559 09:34:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:17.559 256+0 records in 00:06:17.559 256+0 records out 00:06:17.559 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0828544 s, 12.7 MB/s 00:06:17.559 09:34:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:17.559 09:34:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:17.559 256+0 records in 00:06:17.559 256+0 records out 00:06:17.559 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0640635 s, 16.4 MB/s 00:06:17.559 09:34:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:17.559 09:34:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:17.818 256+0 records in 00:06:17.818 256+0 records out 00:06:17.818 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.058485 s, 17.9 MB/s 00:06:17.818 09:34:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:06:17.818 09:34:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:17.818 09:34:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:17.818 09:34:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:17.818 09:34:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:17.818 09:34:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:17.818 09:34:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:17.818 09:34:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:17.818 09:34:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:17.818 09:34:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 
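The data-integrity pass traced above needs no special tooling: one 1 MiB file of random data is pushed through every NBD device with O_DIRECT writes, then each device's first 1 MiB is compared byte-for-byte against the source. In outline, using the same file and device names as the trace:

    randtest=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)

    # write phase: 256 x 4 KiB of random data onto every device
    dd if=/dev/urandom of="$randtest" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$randtest" of="$dev" bs=4096 count=256 oflag=direct
    done

    # verify phase: the first 1 MiB of each device must match the source file
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$randtest" "$dev"
    done
    rm "$randtest"

The per-device throughput figures above (roughly 13 to 18 MB/s) are dd's own accounting for a single 1 MiB direct write, not a benchmark.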
00:06:17.818 09:34:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:17.818 09:34:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:17.818 09:34:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:17.818 09:34:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:17.818 09:34:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:17.818 09:34:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:17.818 09:34:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:17.818 09:34:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:17.818 09:34:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:17.818 09:34:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:17.818 09:34:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:17.818 09:34:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.818 09:34:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:17.818 09:34:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:17.818 09:34:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:17.818 09:34:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.818 09:34:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:18.077 09:34:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:18.077 09:34:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:18.077 09:34:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:18.077 09:34:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.077 09:34:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.077 09:34:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:18.077 09:34:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:18.077 09:34:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.077 09:34:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.077 09:34:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:18.077 09:34:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:18.077 09:34:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:18.077 09:34:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:18.077 09:34:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 
)) 00:06:18.077 09:34:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.077 09:34:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:18.077 09:34:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:18.077 09:34:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.077 09:34:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.077 09:34:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:18.335 09:34:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:18.335 09:34:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:18.335 09:34:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:18.335 09:34:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.335 09:34:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.335 09:34:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:18.335 09:34:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:18.335 09:34:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.335 09:34:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.335 09:34:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:18.593 09:34:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:18.593 09:34:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:18.593 09:34:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:18.593 09:34:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.593 09:34:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.593 09:34:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:18.593 09:34:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:18.593 09:34:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.593 09:34:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.593 09:34:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:18.851 09:34:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:18.851 09:34:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:18.851 09:34:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:18.851 09:34:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.851 09:34:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.851 09:34:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:18.851 09:34:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:18.851 09:34:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.851 09:34:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.851 09:34:46 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:19.110 09:34:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:19.110 09:34:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:19.110 09:34:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:19.110 09:34:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:19.110 09:34:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:19.110 09:34:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:19.110 09:34:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:19.110 09:34:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:19.110 09:34:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:19.110 09:34:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.110 09:34:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:19.110 09:34:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:19.110 09:34:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:19.110 09:34:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:19.110 09:34:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:19.110 09:34:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:19.110 09:34:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:19.110 09:34:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:19.110 09:34:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:19.110 09:34:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:19.110 09:34:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:19.110 09:34:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:19.110 09:34:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:19.110 09:34:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:19.110 09:34:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.110 09:34:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:19.110 09:34:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:19.368 malloc_lvol_verify 00:06:19.368 09:34:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:19.626 893f4557-4dcc-42f9-850d-d97ea70075a4 00:06:19.626 09:34:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:19.884 4ae2a9ba-035e-42cb-b2bd-600e8a479038 00:06:19.884 09:34:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:19.884 /dev/nbd0 00:06:19.884 
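The lvol leg of the test chains four RPCs before any I/O is attempted: a malloc bdev is created, a logical-volume store is built on it, a small volume is carved out, and that volume is exported as /dev/nbd0; the mkfs.ext4 run that follows is the end-to-end check. A sketch with the sizes and names from the trace (the two UUIDs printed above are the lvstore and lvol handles the RPCs return):

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }

    rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB RAM bdev, 512 B blocks
    rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore on top of it
    rpc bdev_lvol_create lvol 4 -l lvs                    # 4 MiB logical volume in 'lvs'
    rpc nbd_start_disk lvs/lvol /dev/nbd0                 # export the lvol over NBD
    mkfs.ext4 /dev/nbd0                                   # 4096 1k blocks, per the mke2fs output below
    rpc nbd_stop_disk /dev/nbd0                           # teardown, as in the trace below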
09:34:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:19.884 09:34:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:19.884 09:34:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:19.884 09:34:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:19.884 09:34:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:20.142 mke2fs 1.47.0 (5-Feb-2023) 00:06:20.142 Discarding device blocks: 0/4096 done 00:06:20.142 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:20.142 00:06:20.142 Allocating group tables: 0/1 done 00:06:20.142 Writing inode tables: 0/1 done 00:06:20.142 Creating journal (1024 blocks): done 00:06:20.142 Writing superblocks and filesystem accounting information: 0/1 done 00:06:20.142 00:06:20.142 09:34:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:20.142 09:34:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.142 09:34:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:20.142 09:34:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:20.142 09:34:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:20.142 09:34:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.142 09:34:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:20.142 09:34:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:20.142 09:34:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:20.142 09:34:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:20.142 09:34:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.142 09:34:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.142 09:34:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:20.142 09:34:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:20.142 09:34:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.142 09:34:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 60062 00:06:20.142 09:34:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 60062 ']' 00:06:20.142 09:34:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 60062 00:06:20.142 09:34:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:06:20.142 09:34:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:20.142 09:34:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60062 00:06:20.142 09:34:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:20.142 09:34:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:20.142 09:34:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60062' 00:06:20.143 killing process with pid 60062 00:06:20.143 09:34:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@971 -- # kill 60062 00:06:20.143 09:34:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@976 
-- # wait 60062 00:06:21.075 09:34:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:21.075 00:06:21.075 real 0m9.296s 00:06:21.075 user 0m13.387s 00:06:21.075 sys 0m2.954s 00:06:21.075 ************************************ 00:06:21.075 END TEST bdev_nbd 00:06:21.075 ************************************ 00:06:21.075 09:34:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:21.075 09:34:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:21.075 09:34:48 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:06:21.075 09:34:48 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:06:21.075 skipping fio tests on NVMe due to multi-ns failures. 00:06:21.075 09:34:48 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:06:21.075 09:34:48 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:21.075 09:34:48 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:21.075 09:34:48 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:06:21.075 09:34:48 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:21.075 09:34:48 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:21.075 ************************************ 00:06:21.075 START TEST bdev_verify 00:06:21.075 ************************************ 00:06:21.075 09:34:48 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:21.075 [2024-11-07 09:34:48.536993] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:06:21.075 [2024-11-07 09:34:48.537084] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60436 ] 00:06:21.075 [2024-11-07 09:34:48.681020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:21.333 [2024-11-07 09:34:48.764328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.333 [2024-11-07 09:34:48.764391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.898 Running I/O for 5 seconds... 
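bdev_verify is not a shell loop at all: it hands the six namespaces to SPDK's bdevperf example in verify mode, which writes a pattern and reads it back from inside the SPDK application. The invocation reduces to the following (flags as in the trace; bdev.json declares the NVMe controllers, and -C is carried over verbatim from the harness):

    args=(
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
        -q 128      # 128 outstanding I/Os per job
        -o 4096     # 4 KiB I/O size
        -w verify   # write, read back, compare
        -t 5        # run each job for 5 seconds
        -C          # kept as-is from the test harness
        -m 0x3      # core mask 0x3: reactors on cores 0 and 1
    )
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf "${args[@]}"

With mask 0x3 both reactors drive the bdevs, which is why every device appears twice in the Latency table below, once per core mask (0x1 and 0x2).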
00:06:23.777 25600.00 IOPS, 100.00 MiB/s [2024-11-07T09:34:52.821Z] 25536.00 IOPS, 99.75 MiB/s [2024-11-07T09:34:53.758Z] 26368.00 IOPS, 103.00 MiB/s [2024-11-07T09:34:54.698Z] 26160.00 IOPS, 102.19 MiB/s [2024-11-07T09:34:54.698Z] 25340.60 IOPS, 98.99 MiB/s 00:06:27.028 Latency(us) 00:06:27.028 [2024-11-07T09:34:54.699Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:27.028 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:27.028 Verification LBA range: start 0x0 length 0xbd0bd 00:06:27.028 Nvme0n1 : 5.06 2050.28 8.01 0.00 0.00 62211.80 7612.26 72997.02 00:06:27.028 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:27.028 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:06:27.028 Nvme0n1 : 5.06 2126.35 8.31 0.00 0.00 60008.24 13107.20 79046.50 00:06:27.028 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:27.028 Verification LBA range: start 0x0 length 0xa0000 00:06:27.028 Nvme1n1 : 5.06 2048.17 8.00 0.00 0.00 62194.58 16232.76 65737.65 00:06:27.028 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:27.028 Verification LBA range: start 0xa0000 length 0xa0000 00:06:27.028 Nvme1n1 : 5.06 2124.84 8.30 0.00 0.00 59929.80 15022.87 74610.22 00:06:27.028 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:27.028 Verification LBA range: start 0x0 length 0x80000 00:06:27.028 Nvme2n1 : 5.07 2046.95 8.00 0.00 0.00 62108.32 17341.83 60494.77 00:06:27.028 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:27.028 Verification LBA range: start 0x80000 length 0x80000 00:06:27.028 Nvme2n1 : 5.06 2124.21 8.30 0.00 0.00 59830.90 16535.24 70980.53 00:06:27.028 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:27.028 Verification LBA range: start 0x0 length 0x80000 00:06:27.028 Nvme2n2 : 5.07 2045.76 7.99 0.00 0.00 62014.43 17745.13 59688.17 00:06:27.028 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:27.028 Verification LBA range: start 0x80000 length 0x80000 00:06:27.028 Nvme2n2 : 5.06 2122.95 8.29 0.00 0.00 59742.87 16938.54 72190.42 00:06:27.028 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:27.028 Verification LBA range: start 0x0 length 0x80000 00:06:27.028 Nvme2n3 : 5.07 2045.21 7.99 0.00 0.00 61910.01 16232.76 63721.16 00:06:27.028 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:27.028 Verification LBA range: start 0x80000 length 0x80000 00:06:27.028 Nvme2n3 : 5.08 2130.52 8.32 0.00 0.00 59466.45 5394.12 78239.90 00:06:27.028 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:27.028 Verification LBA range: start 0x0 length 0x20000 00:06:27.028 Nvme3n1 : 5.08 2054.75 8.03 0.00 0.00 61588.07 2684.46 68157.44 00:06:27.028 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:27.028 Verification LBA range: start 0x20000 length 0x20000 00:06:27.028 Nvme3n1 : 5.08 2129.24 8.32 0.00 0.00 59358.93 7461.02 80256.39 00:06:27.028 [2024-11-07T09:34:54.699Z] =================================================================================================================== 00:06:27.028 [2024-11-07T09:34:54.699Z] Total : 25049.23 97.85 0.00 0.00 60841.89 2684.46 80256.39 00:06:31.241 ************************************ 00:06:31.241 END TEST bdev_verify 00:06:31.241 ************************************ 00:06:31.241 00:06:31.241 
real 0m10.377s 00:06:31.241 user 0m19.832s 00:06:31.241 sys 0m0.245s 00:06:31.241 09:34:58 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:31.241 09:34:58 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:31.505 09:34:58 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:31.505 09:34:58 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:06:31.505 09:34:58 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:31.505 09:34:58 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:31.505 ************************************ 00:06:31.505 START TEST bdev_verify_big_io 00:06:31.505 ************************************ 00:06:31.505 09:34:58 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:31.505 [2024-11-07 09:34:59.022218] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:06:31.505 [2024-11-07 09:34:59.022388] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60531 ] 00:06:31.766 [2024-11-07 09:34:59.182152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:31.766 [2024-11-07 09:34:59.299298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.766 [2024-11-07 09:34:59.299385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.338 Running I/O for 5 seconds... 
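bdev_verify_big_io reruns the identical bdevperf verify workload with one change, the I/O size: aggregate bandwidth stays near 100 MiB/s while IOPS fall from roughly 25k to 1.8k, and the larger requests exercise different code paths (request splitting, bigger DMA transfers). The delta against the previous run:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 65536 -w verify -t 5 -C -m 0x3    # -o 65536: 64 KiB I/Os instead of 4 KiB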
00:06:37.460 1832.00 IOPS, 114.50 MiB/s [2024-11-07T09:35:06.065Z] 2553.00 IOPS, 159.56 MiB/s [2024-11-07T09:35:06.632Z] 2696.67 IOPS, 168.54 MiB/s 00:06:38.961 Latency(us) 00:06:38.961 [2024-11-07T09:35:06.632Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:38.961 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:38.961 Verification LBA range: start 0x0 length 0xbd0b 00:06:38.961 Nvme0n1 : 5.81 88.17 5.51 0.00 0.00 1408033.97 18047.61 1613193.85 00:06:38.961 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:38.961 Verification LBA range: start 0xbd0b length 0xbd0b 00:06:38.961 Nvme0n1 : 5.62 155.70 9.73 0.00 0.00 794373.00 17442.66 803370.54 00:06:38.961 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:38.961 Verification LBA range: start 0x0 length 0xa000 00:06:38.961 Nvme1n1 : 5.81 84.88 5.30 0.00 0.00 1378420.87 22282.24 1361535.61 00:06:38.961 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:38.961 Verification LBA range: start 0xa000 length 0xa000 00:06:38.961 Nvme1n1 : 5.62 154.96 9.69 0.00 0.00 776983.66 79449.80 735616.39 00:06:38.961 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:38.961 Verification LBA range: start 0x0 length 0x8000 00:06:38.961 Nvme2n1 : 5.81 88.12 5.51 0.00 0.00 1263782.99 47992.52 1387346.71 00:06:38.961 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:38.961 Verification LBA range: start 0x8000 length 0x8000 00:06:38.961 Nvme2n1 : 5.62 159.35 9.96 0.00 0.00 746482.05 83079.48 742069.17 00:06:38.961 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:38.961 Verification LBA range: start 0x0 length 0x8000 00:06:38.961 Nvme2n2 : 5.92 105.02 6.56 0.00 0.00 1022009.52 16535.24 1406705.03 00:06:38.961 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:38.961 Verification LBA range: start 0x8000 length 0x8000 00:06:38.961 Nvme2n2 : 5.66 161.91 10.12 0.00 0.00 717231.75 32465.53 758201.11 00:06:38.961 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:38.961 Verification LBA range: start 0x0 length 0x8000 00:06:38.961 Nvme2n3 : 6.12 156.94 9.81 0.00 0.00 653768.55 6604.01 1426063.36 00:06:38.961 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:38.961 Verification LBA range: start 0x8000 length 0x8000 00:06:38.961 Nvme2n3 : 5.69 168.86 10.55 0.00 0.00 674133.02 25206.15 767880.27 00:06:38.961 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:38.961 Verification LBA range: start 0x0 length 0x2000 00:06:38.961 Nvme3n1 : 6.36 278.71 17.42 0.00 0.00 352116.56 494.67 1445421.69 00:06:38.961 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:38.961 Verification LBA range: start 0x2000 length 0x2000 00:06:38.961 Nvme3n1 : 5.73 184.28 11.52 0.00 0.00 604776.33 1890.46 864671.90 00:06:38.961 [2024-11-07T09:35:06.632Z] =================================================================================================================== 00:06:38.961 [2024-11-07T09:35:06.632Z] Total : 1786.91 111.68 0.00 0.00 758013.51 494.67 1613193.85 00:06:41.543 00:06:41.543 real 0m9.860s 00:06:41.543 user 0m18.695s 00:06:41.543 sys 0m0.277s 00:06:41.543 ************************************ 00:06:41.543 END TEST bdev_verify_big_io 00:06:41.543 ************************************ 00:06:41.543 
09:35:08 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:41.543 09:35:08 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:06:41.543 09:35:08 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:41.543 09:35:08 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:06:41.543 09:35:08 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:41.543 09:35:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:41.543 ************************************ 00:06:41.543 START TEST bdev_write_zeroes 00:06:41.543 ************************************ 00:06:41.543 09:35:08 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:41.543 [2024-11-07 09:35:08.908972] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:06:41.543 [2024-11-07 09:35:08.909283] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60653 ] 00:06:41.543 [2024-11-07 09:35:09.070502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.543 [2024-11-07 09:35:09.173921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.114 Running I/O for 1 seconds... 00:06:43.487 67260.00 IOPS, 262.73 MiB/s 00:06:43.487 Latency(us) 00:06:43.487 [2024-11-07T09:35:11.158Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:43.487 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:43.487 Nvme0n1 : 1.02 11146.49 43.54 0.00 0.00 11458.30 5142.06 56865.08 00:06:43.487 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:43.487 Nvme1n1 : 1.02 11199.69 43.75 0.00 0.00 11387.61 7914.73 39523.25 00:06:43.487 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:43.487 Nvme2n1 : 1.02 11123.76 43.45 0.00 0.00 11405.63 7813.91 45169.43 00:06:43.487 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:43.487 Nvme2n2 : 1.03 11110.24 43.40 0.00 0.00 11381.10 7763.50 45371.08 00:06:43.487 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:43.487 Nvme2n3 : 1.03 11097.05 43.35 0.00 0.00 11355.80 7713.08 43556.23 00:06:43.487 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:43.487 Nvme3n1 : 1.03 11084.49 43.30 0.00 0.00 11345.24 7612.26 43354.58 00:06:43.487 [2024-11-07T09:35:11.158Z] =================================================================================================================== 00:06:43.487 [2024-11-07T09:35:11.158Z] Total : 66761.74 260.79 0.00 0.00 11388.94 5142.06 56865.08 00:06:44.054 00:06:44.054 real 0m2.693s 00:06:44.054 user 0m2.360s 00:06:44.054 sys 0m0.216s 00:06:44.054 09:35:11 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:44.054 09:35:11 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:06:44.054 
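bdev_write_zeroes, traced above, swaps the workload for the dedicated zero-write opcode and only needs a short single-core pass (the EAL output shows one core and a one-second run; there is no verify read-back in this mode). The corresponding invocation, with the trailing empty string being the unused extra-arguments slot run_test passes through:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w write_zeroes -t 1 ''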
************************************ 00:06:44.054 END TEST bdev_write_zeroes 00:06:44.054 ************************************ 00:06:44.054 09:35:11 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:44.054 09:35:11 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:06:44.054 09:35:11 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:44.054 09:35:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:44.054 ************************************ 00:06:44.054 START TEST bdev_json_nonenclosed 00:06:44.054 ************************************ 00:06:44.054 09:35:11 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:44.054 [2024-11-07 09:35:11.655355] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:06:44.054 [2024-11-07 09:35:11.655469] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60706 ] 00:06:44.312 [2024-11-07 09:35:11.816238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.312 [2024-11-07 09:35:11.913053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.312 [2024-11-07 09:35:11.913126] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:06:44.312 [2024-11-07 09:35:11.913144] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:44.312 [2024-11-07 09:35:11.913152] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:44.571 ************************************ 00:06:44.571 END TEST bdev_json_nonenclosed 00:06:44.571 ************************************ 00:06:44.571 00:06:44.571 real 0m0.501s 00:06:44.571 user 0m0.311s 00:06:44.571 sys 0m0.086s 00:06:44.571 09:35:12 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:44.571 09:35:12 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:44.571 09:35:12 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:44.571 09:35:12 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:06:44.571 09:35:12 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:44.571 09:35:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:44.571 ************************************ 00:06:44.571 START TEST bdev_json_nonarray 00:06:44.571 ************************************ 00:06:44.571 09:35:12 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:44.571 [2024-11-07 09:35:12.208862] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
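The nonenclosed run above and the nonarray run starting here are negative tests: bdevperf is fed deliberately malformed JSON and must shut down cleanly through spdk_app_stop instead of crashing. Two configs that would trip exactly these checks, sketched below; the real nonenclosed.json and nonarray.json under test/bdev may differ in detail:

    # A config whose top level is not enclosed in {} fails with
    # "Invalid JSON configuration: not enclosed in {}."
    printf '%s\n' '"subsystems": [ { "subsystem": "bdev", "config": [] } ]' > nonenclosed.json

    # One where "subsystems" is an object rather than an array fails with
    # "Invalid JSON configuration: 'subsystems' should be an array."
    printf '%s\n' '{ "subsystems": { "subsystem": "bdev", "config": [] } }' > nonarray.json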
00:06:44.571 [2024-11-07 09:35:12.209080] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60726 ] 00:06:44.829 [2024-11-07 09:35:12.369568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.829 [2024-11-07 09:35:12.465957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.829 [2024-11-07 09:35:12.466048] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:06:44.829 [2024-11-07 09:35:12.466065] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:44.829 [2024-11-07 09:35:12.466074] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:45.088 00:06:45.088 real 0m0.494s 00:06:45.088 user 0m0.297s 00:06:45.088 sys 0m0.094s 00:06:45.088 09:35:12 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:45.088 09:35:12 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:06:45.088 ************************************ 00:06:45.088 END TEST bdev_json_nonarray 00:06:45.088 ************************************ 00:06:45.088 09:35:12 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:06:45.088 09:35:12 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:06:45.088 09:35:12 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:06:45.088 09:35:12 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:06:45.088 09:35:12 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:06:45.088 09:35:12 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:06:45.088 09:35:12 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:45.088 09:35:12 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:06:45.088 09:35:12 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:06:45.088 09:35:12 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:06:45.088 09:35:12 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:06:45.088 00:06:45.088 real 0m40.375s 00:06:45.088 user 1m4.687s 00:06:45.088 sys 0m5.103s 00:06:45.088 ************************************ 00:06:45.088 END TEST blockdev_nvme 00:06:45.088 ************************************ 00:06:45.088 09:35:12 blockdev_nvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:45.088 09:35:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:45.088 09:35:12 -- spdk/autotest.sh@209 -- # uname -s 00:06:45.088 09:35:12 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:06:45.088 09:35:12 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:45.088 09:35:12 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:45.088 09:35:12 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:45.088 09:35:12 -- common/autotest_common.sh@10 -- # set +x 00:06:45.088 ************************************ 00:06:45.088 START TEST blockdev_nvme_gpt 00:06:45.088 ************************************ 00:06:45.088 09:35:12 blockdev_nvme_gpt -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:45.347 * Looking for test storage... 
00:06:45.347 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:45.347 09:35:12 blockdev_nvme_gpt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:45.347 09:35:12 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # lcov --version 00:06:45.347 09:35:12 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:45.347 09:35:12 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:45.347 09:35:12 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:45.347 09:35:12 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:45.347 09:35:12 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:45.347 09:35:12 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:06:45.347 09:35:12 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:06:45.347 09:35:12 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:06:45.347 09:35:12 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:06:45.347 09:35:12 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:06:45.347 09:35:12 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:06:45.347 09:35:12 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:06:45.347 09:35:12 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:45.347 09:35:12 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:06:45.347 09:35:12 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:06:45.347 09:35:12 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:45.347 09:35:12 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:45.347 09:35:12 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:06:45.347 09:35:12 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:06:45.347 09:35:12 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:45.347 09:35:12 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:06:45.347 09:35:12 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:06:45.347 09:35:12 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:06:45.347 09:35:12 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:06:45.347 09:35:12 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:45.347 09:35:12 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:06:45.347 09:35:12 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:06:45.347 09:35:12 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:45.347 09:35:12 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:45.347 09:35:12 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:06:45.347 09:35:12 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:45.347 09:35:12 blockdev_nvme_gpt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:45.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.347 --rc genhtml_branch_coverage=1 00:06:45.347 --rc genhtml_function_coverage=1 00:06:45.347 --rc genhtml_legend=1 00:06:45.347 --rc geninfo_all_blocks=1 00:06:45.347 --rc geninfo_unexecuted_blocks=1 00:06:45.347 00:06:45.347 ' 00:06:45.347 09:35:12 blockdev_nvme_gpt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:45.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.347 --rc 
genhtml_branch_coverage=1 00:06:45.347 --rc genhtml_function_coverage=1 00:06:45.347 --rc genhtml_legend=1 00:06:45.347 --rc geninfo_all_blocks=1 00:06:45.347 --rc geninfo_unexecuted_blocks=1 00:06:45.347 00:06:45.347 ' 00:06:45.347 09:35:12 blockdev_nvme_gpt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:45.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.347 --rc genhtml_branch_coverage=1 00:06:45.347 --rc genhtml_function_coverage=1 00:06:45.347 --rc genhtml_legend=1 00:06:45.347 --rc geninfo_all_blocks=1 00:06:45.347 --rc geninfo_unexecuted_blocks=1 00:06:45.347 00:06:45.347 ' 00:06:45.347 09:35:12 blockdev_nvme_gpt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:45.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.347 --rc genhtml_branch_coverage=1 00:06:45.347 --rc genhtml_function_coverage=1 00:06:45.347 --rc genhtml_legend=1 00:06:45.347 --rc geninfo_all_blocks=1 00:06:45.347 --rc geninfo_unexecuted_blocks=1 00:06:45.347 00:06:45.347 ' 00:06:45.347 09:35:12 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:45.347 09:35:12 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:06:45.347 09:35:12 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:45.347 09:35:12 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:45.347 09:35:12 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:45.347 09:35:12 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:45.347 09:35:12 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:06:45.347 09:35:12 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:45.347 09:35:12 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:06:45.347 09:35:12 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:45.347 09:35:12 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:45.347 09:35:12 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:45.347 09:35:12 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:06:45.347 09:35:12 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:45.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:45.347 09:35:12 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:45.347 09:35:12 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:06:45.347 09:35:12 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:45.347 09:35:12 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:06:45.347 09:35:12 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:45.348 09:35:12 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:45.348 09:35:12 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:45.348 09:35:12 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:06:45.348 09:35:12 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:06:45.348 09:35:12 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:45.348 09:35:12 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60810 00:06:45.348 09:35:12 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:45.348 09:35:12 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 60810 00:06:45.348 09:35:12 blockdev_nvme_gpt -- common/autotest_common.sh@833 -- # '[' -z 60810 ']' 00:06:45.348 09:35:12 blockdev_nvme_gpt -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.348 09:35:12 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:45.348 09:35:12 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:45.348 09:35:12 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.348 09:35:12 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:45.348 09:35:12 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:45.348 [2024-11-07 09:35:12.954810] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:06:45.348 [2024-11-07 09:35:12.954930] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60810 ] 00:06:45.606 [2024-11-07 09:35:13.115458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.606 [2024-11-07 09:35:13.212886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.172 09:35:13 blockdev_nvme_gpt -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:46.172 09:35:13 blockdev_nvme_gpt -- common/autotest_common.sh@866 -- # return 0 00:06:46.172 09:35:13 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:46.172 09:35:13 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:06:46.172 09:35:13 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:46.433 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:46.696 Waiting for block devices as requested 00:06:46.696 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:46.696 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:46.953 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:46.953 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:52.215 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:52.215 09:35:19 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:06:52.215 09:35:19 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:06:52.215 09:35:19 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:06:52.215 09:35:19 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # local nvme bdf 00:06:52.215 09:35:19 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:52.215 09:35:19 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:06:52.215 09:35:19 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:06:52.215 09:35:19 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:52.215 09:35:19 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:52.215 09:35:19 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:52.215 09:35:19 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:06:52.215 09:35:19 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:06:52.215 09:35:19 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:52.215 09:35:19 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:52.215 09:35:19 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:52.215 09:35:19 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:06:52.215 09:35:19 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:06:52.215 09:35:19 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:52.215 09:35:19 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:52.215 09:35:19 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- 
# for nvme in /sys/block/nvme* 00:06:52.215 09:35:19 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:06:52.215 09:35:19 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:06:52.215 09:35:19 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:06:52.215 09:35:19 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:52.215 09:35:19 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:52.215 09:35:19 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:06:52.215 09:35:19 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:06:52.215 09:35:19 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:06:52.215 09:35:19 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:52.215 09:35:19 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:52.215 09:35:19 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:06:52.215 09:35:19 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:06:52.215 09:35:19 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:06:52.215 09:35:19 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:52.215 09:35:19 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:52.215 09:35:19 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:06:52.215 09:35:19 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:06:52.215 09:35:19 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:06:52.215 09:35:19 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:52.215 09:35:19 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:06:52.215 09:35:19 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:06:52.215 09:35:19 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:06:52.215 09:35:19 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:06:52.215 09:35:19 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:06:52.215 09:35:19 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:06:52.215 09:35:19 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:06:52.215 09:35:19 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:06:52.215 BYT; 00:06:52.215 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:06:52.215 09:35:19 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:06:52.215 BYT; 00:06:52.215 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:06:52.215 09:35:19 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:06:52.215 09:35:19 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:06:52.215 09:35:19 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 
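The get_zoned_devs loop traced above reduces to roughly the following shell; this is an approximate reconstruction from the xtrace, not the verbatim autotest_common.sh source. On this VM every namespace reports "none", so no device is excluded:

  declare -A zoned_devs
  for nvme in /sys/block/nvme*; do
      # a block device is zoned when queue/zoned reports something other than "none"
      if [[ -e $nvme/queue/zoned && $(<"$nvme/queue/zoned") != none ]]; then
          zoned_devs[${nvme##*/}]=1   # zoned namespaces are skipped by the GPT setup
      fi
  done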
00:06:52.215 09:35:19 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:06:52.215 09:35:19 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:06:52.215 09:35:19 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:06:52.215 09:35:19 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:06:52.215 09:35:19 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:06:52.215 09:35:19 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:52.215 09:35:19 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:52.215 09:35:19 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:06:52.215 09:35:19 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:06:52.215 09:35:19 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:52.215 09:35:19 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:06:52.215 09:35:19 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:52.215 09:35:19 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:52.215 09:35:19 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:52.215 09:35:19 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:06:52.215 09:35:19 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:06:52.215 09:35:19 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:52.215 09:35:19 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:52.215 09:35:19 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:06:52.215 09:35:19 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:06:52.215 09:35:19 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:52.215 09:35:19 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:06:52.215 09:35:19 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:52.215 09:35:19 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:52.215 09:35:19 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:52.215 09:35:19 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:06:53.146 The operation has completed successfully. 00:06:53.146 09:35:20 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:06:54.077 The operation has completed successfully. 
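The GPT setup traced above condenses to three commands: parted writes a fresh label with two half-disk partitions, then sgdisk retags them with the SPDK partition-type GUIDs grepped out of module/bdev/gpt/gpt.h (all GUIDs exactly as they appear in the trace; comments added):

  parted -s /dev/nvme0n1 mklabel gpt \
      mkpart SPDK_TEST_first 0% 50% \
      mkpart SPDK_TEST_second 50% 100%
  # partition 1: current SPDK_GPT_PART_TYPE_GUID plus a fixed unique GUID
  sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b \
         -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1
  # partition 2: SPDK_GPT_PART_TYPE_GUID_OLD plus a fixed unique GUID
  sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c \
         -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1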
00:06:54.077 09:35:21 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:54.641 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:54.899 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:54.899 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:54.899 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:55.158 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:55.158 09:35:22 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:06:55.158 09:35:22 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.158 09:35:22 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:55.158 [] 00:06:55.158 09:35:22 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.158 09:35:22 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:06:55.158 09:35:22 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:06:55.158 09:35:22 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:55.158 09:35:22 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:55.159 09:35:22 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:55.159 09:35:22 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.159 09:35:22 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:55.420 09:35:22 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.420 09:35:22 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:55.420 09:35:22 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.420 09:35:22 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:55.420 09:35:22 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.420 09:35:22 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:06:55.420 09:35:22 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:55.420 09:35:22 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.420 09:35:22 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:55.420 09:35:22 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.420 09:35:22 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:55.420 09:35:22 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.420 09:35:22 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:55.420 09:35:23 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.420 09:35:23 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:55.420 09:35:23 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.420 09:35:23 blockdev_nvme_gpt -- 
common/autotest_common.sh@10 -- # set +x 00:06:55.420 09:35:23 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.420 09:35:23 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:55.420 09:35:23 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:55.420 09:35:23 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.420 09:35:23 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:55.420 09:35:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:55.420 09:35:23 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.420 09:35:23 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:55.420 09:35:23 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:55.421 09:35:23 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "ecfede6e-8ed9-4fe5-8de6-6957b0a596d9"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "ecfede6e-8ed9-4fe5-8de6-6957b0a596d9",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": 
"6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "7dd55680-9382-4247-8e08-e99436d9b60b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7dd55680-9382-4247-8e08-e99436d9b60b",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "d47a0b2d-ad40-43c4-92d8-f7ce3cd9ffde"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d47a0b2d-ad40-43c4-92d8-f7ce3cd9ffde",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' 
"zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "a7b1f718-cccb-4c95-92b2-7a9ec07cb82f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a7b1f718-cccb-4c95-92b2-7a9ec07cb82f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "68fba386-9dbe-4237-b024-dea13899237e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "68fba386-9dbe-4237-b024-dea13899237e",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' 
"subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:55.681 09:35:23 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:55.681 09:35:23 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:55.681 09:35:23 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:55.681 09:35:23 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 60810 00:06:55.681 09:35:23 blockdev_nvme_gpt -- common/autotest_common.sh@952 -- # '[' -z 60810 ']' 00:06:55.681 09:35:23 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # kill -0 60810 00:06:55.681 09:35:23 blockdev_nvme_gpt -- common/autotest_common.sh@957 -- # uname 00:06:55.681 09:35:23 blockdev_nvme_gpt -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:55.681 09:35:23 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60810 00:06:55.681 killing process with pid 60810 00:06:55.681 09:35:23 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:55.681 09:35:23 blockdev_nvme_gpt -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:55.681 09:35:23 blockdev_nvme_gpt -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60810' 00:06:55.681 09:35:23 blockdev_nvme_gpt -- common/autotest_common.sh@971 -- # kill 60810 00:06:55.681 09:35:23 blockdev_nvme_gpt -- common/autotest_common.sh@976 -- # wait 60810 00:06:57.594 09:35:24 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:57.594 09:35:24 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:57.594 09:35:24 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:06:57.594 09:35:24 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:57.594 09:35:24 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:57.594 ************************************ 00:06:57.594 START TEST bdev_hello_world 00:06:57.594 ************************************ 00:06:57.594 09:35:24 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:57.594 [2024-11-07 09:35:24.864967] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:06:57.594 [2024-11-07 09:35:24.865144] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61435 ] 00:06:57.594 [2024-11-07 09:35:25.034567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.594 [2024-11-07 09:35:25.157520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.166 [2024-11-07 09:35:25.746353] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:58.166 [2024-11-07 09:35:25.746421] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:58.166 [2024-11-07 09:35:25.746446] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:58.166 [2024-11-07 09:35:25.749255] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:58.166 [2024-11-07 09:35:25.750092] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:58.166 [2024-11-07 09:35:25.750268] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:58.166 [2024-11-07 09:35:25.750763] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:06:58.166 00:06:58.166 [2024-11-07 09:35:25.750798] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:59.113 ************************************ 00:06:59.113 END TEST bdev_hello_world 00:06:59.113 ************************************ 00:06:59.113 00:06:59.113 real 0m1.966s 00:06:59.113 user 0m1.574s 00:06:59.113 sys 0m0.280s 00:06:59.113 09:35:26 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:59.113 09:35:26 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:59.394 09:35:26 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:59.394 09:35:26 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:59.394 09:35:26 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:59.394 09:35:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:59.394 ************************************ 00:06:59.394 START TEST bdev_bounds 00:06:59.394 ************************************ 00:06:59.394 Process bdevio pid: 61473 00:06:59.394 09:35:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:06:59.394 09:35:26 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61473 00:06:59.394 09:35:26 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:59.394 09:35:26 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61473' 00:06:59.394 09:35:26 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61473 00:06:59.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
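The bdev_hello_world test that finished above is one run of the hello_bdev example, which opens the named bdev, writes "Hello World!", reads it back, and stops the app; the command, reconstructed from the run_test line in the trace:

  /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -b Nvme0n1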
00:06:59.394 09:35:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 61473 ']' 00:06:59.395 09:35:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.395 09:35:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:59.395 09:35:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.395 09:35:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:59.395 09:35:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:59.395 09:35:26 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:59.395 [2024-11-07 09:35:26.899135] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:06:59.395 [2024-11-07 09:35:26.899468] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61473 ] 00:06:59.667 [2024-11-07 09:35:27.061773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:59.667 [2024-11-07 09:35:27.195520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.667 [2024-11-07 09:35:27.195866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.667 [2024-11-07 09:35:27.195881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:00.239 09:35:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:00.239 09:35:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:07:00.239 09:35:27 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:00.501 I/O targets: 00:07:00.501 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:00.501 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:07:00.501 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:07:00.501 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:00.501 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:00.501 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:00.501 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:00.501 00:07:00.501 00:07:00.501 CUnit - A unit testing framework for C - Version 2.1-3 00:07:00.501 http://cunit.sourceforge.net/ 00:07:00.501 00:07:00.501 00:07:00.501 Suite: bdevio tests on: Nvme3n1 00:07:00.501 Test: blockdev write read block ...passed 00:07:00.501 Test: blockdev write zeroes read block ...passed 00:07:00.501 Test: blockdev write zeroes read no split ...passed 00:07:00.501 Test: blockdev write zeroes read split ...passed 00:07:00.501 Test: blockdev write zeroes read split partial ...passed 00:07:00.501 Test: blockdev reset ...[2024-11-07 09:35:27.981476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:07:00.501 passed 00:07:00.501 Test: blockdev write read 8 blocks ...[2024-11-07 09:35:27.988075] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:07:00.501 passed 00:07:00.501 Test: blockdev write read size > 128k ...passed 00:07:00.501 Test: blockdev write read invalid size ...passed 00:07:00.501 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:00.501 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:00.501 Test: blockdev write read max offset ...passed 00:07:00.501 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:00.501 Test: blockdev writev readv 8 blocks ...passed 00:07:00.501 Test: blockdev writev readv 30 x 1block ...passed 00:07:00.501 Test: blockdev writev readv block ...passed 00:07:00.501 Test: blockdev writev readv size > 128k ...passed 00:07:00.501 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:00.501 Test: blockdev comparev and writev ...[2024-11-07 09:35:28.000020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b7604000 len:0x1000 00:07:00.501 [2024-11-07 09:35:28.000125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:00.501 passed 00:07:00.501 Test: blockdev nvme passthru rw ...passed 00:07:00.501 Test: blockdev nvme passthru vendor specific ...passed 00:07:00.501 Test: blockdev nvme admin passthru ...[2024-11-07 09:35:28.001304] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:00.501 [2024-11-07 09:35:28.001363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:00.501 passed 00:07:00.501 Test: blockdev copy ...passed 00:07:00.501 Suite: bdevio tests on: Nvme2n3 00:07:00.501 Test: blockdev write read block ...passed 00:07:00.501 Test: blockdev write zeroes read block ...passed 00:07:00.501 Test: blockdev write zeroes read no split ...passed 00:07:00.501 Test: blockdev write zeroes read split ...passed 00:07:00.501 Test: blockdev write zeroes read split partial ...passed 00:07:00.501 Test: blockdev reset ...[2024-11-07 09:35:28.053366] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:00.501 [2024-11-07 09:35:28.059294] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:00.501 passed 00:07:00.501 Test: blockdev write read 8 blocks ...passed 00:07:00.501 Test: blockdev write read size > 128k ...passed 00:07:00.501 Test: blockdev write read invalid size ...passed 00:07:00.501 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:00.501 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:00.501 Test: blockdev write read max offset ...passed 00:07:00.501 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:00.501 Test: blockdev writev readv 8 blocks ...passed 00:07:00.501 Test: blockdev writev readv 30 x 1block ...passed 00:07:00.501 Test: blockdev writev readv block ...passed 00:07:00.501 Test: blockdev writev readv size > 128k ...passed 00:07:00.501 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:00.501 Test: blockdev comparev and writev ...[2024-11-07 09:35:28.073904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b7602000 len:0x1000 00:07:00.501 [2024-11-07 09:35:28.074073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:00.501 passed 00:07:00.501 Test: blockdev nvme passthru rw ...passed 00:07:00.501 Test: blockdev nvme passthru vendor specific ...passed 00:07:00.501 Test: blockdev nvme admin passthru ...[2024-11-07 09:35:28.076191] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:00.501 [2024-11-07 09:35:28.076229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:00.501 passed 00:07:00.501 Test: blockdev copy ...passed 00:07:00.501 Suite: bdevio tests on: Nvme2n2 00:07:00.501 Test: blockdev write read block ...passed 00:07:00.501 Test: blockdev write zeroes read block ...passed 00:07:00.501 Test: blockdev write zeroes read no split ...passed 00:07:00.501 Test: blockdev write zeroes read split ...passed 00:07:00.501 Test: blockdev write zeroes read split partial ...passed 00:07:00.501 Test: blockdev reset ...[2024-11-07 09:35:28.133871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:00.501 [2024-11-07 09:35:28.137961] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:00.501 passed 00:07:00.501 Test: blockdev write read 8 blocks ...passed 00:07:00.501 Test: blockdev write read size > 128k ...passed 00:07:00.501 Test: blockdev write read invalid size ...passed 00:07:00.501 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:00.501 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:00.501 Test: blockdev write read max offset ...passed 00:07:00.501 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:00.501 Test: blockdev writev readv 8 blocks ...passed 00:07:00.501 Test: blockdev writev readv 30 x 1block ...passed 00:07:00.501 Test: blockdev writev readv block ...passed 00:07:00.501 Test: blockdev writev readv size > 128k ...passed 00:07:00.501 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:00.501 Test: blockdev comparev and writev ...[2024-11-07 09:35:28.150201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cee38000 len:0x1000 00:07:00.501 [2024-11-07 09:35:28.150244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:00.501 passed 00:07:00.501 Test: blockdev nvme passthru rw ...passed 00:07:00.501 Test: blockdev nvme passthru vendor specific ...[2024-11-07 09:35:28.151905] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:00.501 passed 00:07:00.501 Test: blockdev nvme admin passthru ...[2024-11-07 09:35:28.152019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:00.501 passed 00:07:00.501 Test: blockdev copy ...passed 00:07:00.501 Suite: bdevio tests on: Nvme2n1 00:07:00.501 Test: blockdev write read block ...passed 00:07:00.501 Test: blockdev write zeroes read block ...passed 00:07:00.501 Test: blockdev write zeroes read no split ...passed 00:07:00.762 Test: blockdev write zeroes read split ...passed 00:07:00.762 Test: blockdev write zeroes read split partial ...passed 00:07:00.762 Test: blockdev reset ...[2024-11-07 09:35:28.205591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:00.762 [2024-11-07 09:35:28.209302] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:07:00.762 passed 00:07:00.762 Test: blockdev write read 8 blocks ...
00:07:00.762 passed 00:07:00.762 Test: blockdev write read size > 128k ...passed 00:07:00.762 Test: blockdev write read invalid size ...passed 00:07:00.762 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:00.762 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:00.762 Test: blockdev write read max offset ...passed 00:07:00.762 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:00.762 Test: blockdev writev readv 8 blocks ...passed 00:07:00.762 Test: blockdev writev readv 30 x 1block ...passed 00:07:00.762 Test: blockdev writev readv block ...passed 00:07:00.762 Test: blockdev writev readv size > 128k ...passed 00:07:00.762 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:00.762 Test: blockdev comparev and writev ...[2024-11-07 09:35:28.224783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cee34000 len:0x1000 00:07:00.762 [2024-11-07 09:35:28.224910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:00.762 passed 00:07:00.762 Test: blockdev nvme passthru rw ...passed 00:07:00.763 Test: blockdev nvme passthru vendor specific ...[2024-11-07 09:35:28.226491] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:00.763 [2024-11-07 09:35:28.226598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:00.763 passed 00:07:00.763 Test: blockdev nvme admin passthru ...passed 00:07:00.763 Test: blockdev copy ...passed 00:07:00.763 Suite: bdevio tests on: Nvme1n1p2 00:07:00.763 Test: blockdev write read block ...passed 00:07:00.763 Test: blockdev write zeroes read block ...passed 00:07:00.763 Test: blockdev write zeroes read no split ...passed 00:07:00.763 Test: blockdev write zeroes read split ...passed 00:07:00.763 Test: blockdev write zeroes read split partial ...passed 00:07:00.763 Test: blockdev reset ...[2024-11-07 09:35:28.287299] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:00.763 [2024-11-07 09:35:28.292401] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 00:07:00.763 passed 00:07:00.763 Test: blockdev write read 8 blocks ...
00:07:00.763 passed 00:07:00.763 Test: blockdev write read size > 128k ...passed 00:07:00.763 Test: blockdev write read invalid size ...passed 00:07:00.763 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:00.763 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:00.763 Test: blockdev write read max offset ...passed 00:07:00.763 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:00.763 Test: blockdev writev readv 8 blocks ...passed 00:07:00.763 Test: blockdev writev readv 30 x 1block ...passed 00:07:00.763 Test: blockdev writev readv block ...passed 00:07:00.763 Test: blockdev writev readv size > 128k ...passed 00:07:00.763 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:00.763 Test: blockdev comparev and writev ...[2024-11-07 09:35:28.303824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2cee30000 len:0x1000 00:07:00.763 [2024-11-07 09:35:28.303862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:00.763 passed 00:07:00.763 Test: blockdev nvme passthru rw ...passed 00:07:00.763 Test: blockdev nvme passthru vendor specific ...passed 00:07:00.763 Test: blockdev nvme admin passthru ...passed 00:07:00.763 Test: blockdev copy ...passed 00:07:00.763 Suite: bdevio tests on: Nvme1n1p1 00:07:00.763 Test: blockdev write read block ...passed 00:07:00.763 Test: blockdev write zeroes read block ...passed 00:07:00.763 Test: blockdev write zeroes read no split ...passed 00:07:00.763 Test: blockdev write zeroes read split ...passed 00:07:00.763 Test: blockdev write zeroes read split partial ...passed 00:07:00.763 Test: blockdev reset ...[2024-11-07 09:35:28.349367] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:00.763 [2024-11-07 09:35:28.353992] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 00:07:00.763 passed 00:07:00.763 Test: blockdev write read 8 blocks ...
00:07:00.763 passed 00:07:00.763 Test: blockdev write read size > 128k ...passed 00:07:00.763 Test: blockdev write read invalid size ...passed 00:07:00.763 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:00.763 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:00.763 Test: blockdev write read max offset ...passed 00:07:00.763 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:00.763 Test: blockdev writev readv 8 blocks ...passed 00:07:00.763 Test: blockdev writev readv 30 x 1block ...passed 00:07:00.763 Test: blockdev writev readv block ...passed 00:07:00.763 Test: blockdev writev readv size > 128k ...passed 00:07:00.763 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:00.763 Test: blockdev comparev and writev ...[2024-11-07 09:35:28.369186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2b6c0e000 len:0x1000 00:07:00.763 [2024-11-07 09:35:28.369313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:00.763 passed 00:07:00.763 Test: blockdev nvme passthru rw ...passed 00:07:00.763 Test: blockdev nvme passthru vendor specific ...passed 00:07:00.763 Test: blockdev nvme admin passthru ...passed 00:07:00.763 Test: blockdev copy ...passed 00:07:00.763 Suite: bdevio tests on: Nvme0n1 00:07:00.763 Test: blockdev write read block ...passed 00:07:00.763 Test: blockdev write zeroes read block ...passed 00:07:00.763 Test: blockdev write zeroes read no split ...passed 00:07:00.763 Test: blockdev write zeroes read split ...passed 00:07:00.763 Test: blockdev write zeroes read split partial ...passed 00:07:00.763 Test: blockdev reset ...[2024-11-07 09:35:28.420513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:07:00.763 [2024-11-07 09:35:28.425226] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:07:00.763 passed 00:07:00.763 Test: blockdev write read 8 blocks ...passed 00:07:00.763 Test: blockdev write read size > 128k ...passed 00:07:00.763 Test: blockdev write read invalid size ...passed 00:07:00.763 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:00.763 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:00.763 Test: blockdev write read max offset ...passed 00:07:00.763 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:00.763 Test: blockdev writev readv 8 blocks ...passed 00:07:01.024 Test: blockdev writev readv 30 x 1block ...passed 00:07:01.024 Test: blockdev writev readv block ...passed 00:07:01.024 Test: blockdev writev readv size > 128k ...passed 00:07:01.024 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:01.024 Test: blockdev comparev and writev ...passed 00:07:01.024 Test: blockdev nvme passthru rw ...[2024-11-07 09:35:28.436164] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:01.024 separate metadata which is not supported yet. 
00:07:01.024 passed 00:07:01.024 Test: blockdev nvme passthru vendor specific ...passed 00:07:01.024 Test: blockdev nvme admin passthru ...[2024-11-07 09:35:28.437492] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:01.024 [2024-11-07 09:35:28.437539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:01.024 passed 00:07:01.024 Test: blockdev copy ...passed 00:07:01.024 00:07:01.024 Run Summary: Type Total Ran Passed Failed Inactive 00:07:01.024 suites 7 7 n/a 0 0 00:07:01.024 tests 161 161 161 0 0 00:07:01.024 asserts 1025 1025 1025 0 n/a 00:07:01.024 00:07:01.024 Elapsed time = 1.317 seconds 00:07:01.024 0 00:07:01.024 09:35:28 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61473 00:07:01.024 09:35:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 61473 ']' 00:07:01.024 09:35:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 61473 00:07:01.024 09:35:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:07:01.024 09:35:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:01.024 09:35:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61473 00:07:01.024 09:35:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:01.024 09:35:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:01.024 killing process with pid 61473 00:07:01.024 09:35:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61473' 00:07:01.024 09:35:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@971 -- # kill 61473 00:07:01.024 09:35:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@976 -- # wait 61473 00:07:01.592 09:35:29 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:01.592 00:07:01.592 real 0m2.336s 00:07:01.592 user 0m5.794s 00:07:01.592 sys 0m0.377s 00:07:01.592 09:35:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:01.592 09:35:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:01.592 ************************************ 00:07:01.592 END TEST bdev_bounds 00:07:01.592 ************************************ 00:07:01.592 09:35:29 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:01.592 09:35:29 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:07:01.592 09:35:29 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:01.592 09:35:29 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:01.592 ************************************ 00:07:01.592 START TEST bdev_nbd 00:07:01.592 ************************************ 00:07:01.592 09:35:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:01.592 09:35:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:01.592 09:35:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:07:01.592 09:35:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.592 09:35:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:01.592 09:35:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:01.592 09:35:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:01.592 09:35:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:07:01.592 09:35:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:01.592 09:35:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:01.592 09:35:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:01.592 09:35:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:07:01.592 09:35:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:01.592 09:35:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:01.592 09:35:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:01.592 09:35:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:01.592 09:35:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61532 00:07:01.592 09:35:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:01.592 09:35:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61532 /var/tmp/spdk-nbd.sock 00:07:01.592 09:35:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 61532 ']' 00:07:01.592 09:35:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:01.592 09:35:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:01.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:01.592 09:35:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:01.592 09:35:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:01.592 09:35:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:01.592 09:35:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:01.852 [2024-11-07 09:35:29.295989] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
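For reference, the nbd stage that starts here reduces to a small pattern: launch bdev_svc with the bdev JSON config on a private RPC socket, wait until that socket answers, and only then issue nbd_* RPCs against it. A minimal sketch of that pattern, assuming a retry loop that probes rpc_get_methods (the real waitforlisten helper in common/autotest_common.sh may probe the socket differently):

    #!/usr/bin/env bash
    # Sketch: start bdev_svc and wait for its RPC socket.
    # The probe loop below is illustrative, not the actual waitforlisten body.
    SPDK=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/spdk-nbd.sock
    "$SPDK/test/app/bdev_svc/bdev_svc" -r "$SOCK" -i 0 --json "$SPDK/test/bdev/bdev.json" &
    nbd_pid=$!
    for ((i = 0; i < 100; i++)); do
        # rpc_get_methods succeeds once the app is up and listening on $SOCK
        "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods &> /dev/null && break
        sleep 0.1
    done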
00:07:01.852 [2024-11-07 09:35:29.296111] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:01.852 [2024-11-07 09:35:29.453021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.111 [2024-11-07 09:35:29.549759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.679 09:35:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:02.679 09:35:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:07:02.679 09:35:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:02.679 09:35:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.679 09:35:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:02.679 09:35:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:02.679 09:35:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:02.679 09:35:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.679 09:35:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:02.679 09:35:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:02.679 09:35:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:02.679 09:35:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:02.679 09:35:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:02.679 09:35:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:02.679 09:35:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:02.940 09:35:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:02.940 09:35:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:02.940 09:35:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:02.940 09:35:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:07:02.940 09:35:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:02.940 09:35:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:02.940 09:35:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:02.940 09:35:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:07:02.940 09:35:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:02.940 09:35:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:02.940 09:35:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:02.940 09:35:30 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:02.940 1+0 records in 00:07:02.940 1+0 records out 00:07:02.940 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000977068 s, 4.2 MB/s 00:07:02.940 09:35:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.940 09:35:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:02.940 09:35:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.940 09:35:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:02.940 09:35:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:02.940 09:35:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:02.940 09:35:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:02.940 09:35:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:07:02.940 09:35:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:02.940 09:35:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:02.940 09:35:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:02.940 09:35:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:07:02.940 09:35:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:02.940 09:35:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:02.940 09:35:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:02.940 09:35:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:07:02.940 09:35:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:02.940 09:35:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:02.940 09:35:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:02.940 09:35:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:03.201 1+0 records in 00:07:03.201 1+0 records out 00:07:03.201 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000501247 s, 8.2 MB/s 00:07:03.201 09:35:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.201 09:35:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:03.201 09:35:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.201 09:35:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:03.201 09:35:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:03.201 09:35:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:03.201 09:35:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:03.201 09:35:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:07:03.201 09:35:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:03.201 09:35:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:03.201 09:35:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:03.201 09:35:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd2 00:07:03.201 09:35:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:03.201 09:35:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:03.201 09:35:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:03.201 09:35:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd2 /proc/partitions 00:07:03.201 09:35:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:03.201 09:35:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:03.201 09:35:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:03.201 09:35:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:03.201 1+0 records in 00:07:03.201 1+0 records out 00:07:03.201 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00059217 s, 6.9 MB/s 00:07:03.201 09:35:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.201 09:35:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:03.201 09:35:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.201 09:35:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:03.201 09:35:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:03.201 09:35:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:03.201 09:35:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:03.201 09:35:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:03.459 09:35:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:03.459 09:35:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:03.459 09:35:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:03.459 09:35:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd3 00:07:03.459 09:35:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:03.459 09:35:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:03.459 09:35:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:03.459 09:35:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd3 /proc/partitions 00:07:03.459 09:35:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:03.459 09:35:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:03.459 09:35:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:03.459 09:35:31 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@887 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:03.459 1+0 records in 00:07:03.459 1+0 records out 00:07:03.459 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106635 s, 3.8 MB/s 00:07:03.459 09:35:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.459 09:35:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:03.459 09:35:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.459 09:35:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:03.459 09:35:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:03.459 09:35:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:03.459 09:35:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:03.459 09:35:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:03.719 09:35:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:03.719 09:35:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:03.719 09:35:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:03.719 09:35:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd4 00:07:03.719 09:35:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:03.719 09:35:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:03.719 09:35:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:03.719 09:35:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd4 /proc/partitions 00:07:03.719 09:35:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:03.719 09:35:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:03.719 09:35:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:03.719 09:35:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:03.719 1+0 records in 00:07:03.719 1+0 records out 00:07:03.719 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000572801 s, 7.2 MB/s 00:07:03.719 09:35:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.719 09:35:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:03.719 09:35:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.719 09:35:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:03.719 09:35:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:03.719 09:35:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:03.719 09:35:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:03.719 09:35:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
00:07:03.980 09:35:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:03.980 09:35:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:03.980 09:35:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:03.980 09:35:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd5 00:07:03.980 09:35:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:03.980 09:35:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:03.980 09:35:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:03.980 09:35:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd5 /proc/partitions 00:07:03.980 09:35:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:03.980 09:35:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:03.980 09:35:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:03.980 09:35:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:03.980 1+0 records in 00:07:03.980 1+0 records out 00:07:03.980 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00192356 s, 2.1 MB/s 00:07:03.980 09:35:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.980 09:35:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:03.980 09:35:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.980 09:35:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:03.980 09:35:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:03.980 09:35:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:03.980 09:35:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:03.980 09:35:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:04.240 09:35:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:04.240 09:35:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:04.240 09:35:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:04.240 09:35:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd6 00:07:04.240 09:35:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:04.240 09:35:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:04.240 09:35:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:04.240 09:35:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd6 /proc/partitions 00:07:04.240 09:35:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:04.240 09:35:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:04.240 09:35:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:04.240 09:35:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd 
if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:04.240 1+0 records in 00:07:04.240 1+0 records out 00:07:04.240 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000873509 s, 4.7 MB/s 00:07:04.240 09:35:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.240 09:35:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:04.240 09:35:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.240 09:35:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:04.240 09:35:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:04.240 09:35:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:04.241 09:35:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:04.241 09:35:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:04.503 09:35:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:04.503 { 00:07:04.503 "nbd_device": "/dev/nbd0", 00:07:04.503 "bdev_name": "Nvme0n1" 00:07:04.503 }, 00:07:04.503 { 00:07:04.503 "nbd_device": "/dev/nbd1", 00:07:04.503 "bdev_name": "Nvme1n1p1" 00:07:04.503 }, 00:07:04.503 { 00:07:04.503 "nbd_device": "/dev/nbd2", 00:07:04.503 "bdev_name": "Nvme1n1p2" 00:07:04.503 }, 00:07:04.503 { 00:07:04.503 "nbd_device": "/dev/nbd3", 00:07:04.503 "bdev_name": "Nvme2n1" 00:07:04.503 }, 00:07:04.503 { 00:07:04.503 "nbd_device": "/dev/nbd4", 00:07:04.503 "bdev_name": "Nvme2n2" 00:07:04.503 }, 00:07:04.503 { 00:07:04.503 "nbd_device": "/dev/nbd5", 00:07:04.503 "bdev_name": "Nvme2n3" 00:07:04.503 }, 00:07:04.503 { 00:07:04.503 "nbd_device": "/dev/nbd6", 00:07:04.503 "bdev_name": "Nvme3n1" 00:07:04.503 } 00:07:04.503 ]' 00:07:04.503 09:35:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:04.503 09:35:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:04.503 { 00:07:04.503 "nbd_device": "/dev/nbd0", 00:07:04.503 "bdev_name": "Nvme0n1" 00:07:04.503 }, 00:07:04.503 { 00:07:04.503 "nbd_device": "/dev/nbd1", 00:07:04.503 "bdev_name": "Nvme1n1p1" 00:07:04.503 }, 00:07:04.503 { 00:07:04.503 "nbd_device": "/dev/nbd2", 00:07:04.503 "bdev_name": "Nvme1n1p2" 00:07:04.503 }, 00:07:04.503 { 00:07:04.503 "nbd_device": "/dev/nbd3", 00:07:04.503 "bdev_name": "Nvme2n1" 00:07:04.503 }, 00:07:04.503 { 00:07:04.503 "nbd_device": "/dev/nbd4", 00:07:04.503 "bdev_name": "Nvme2n2" 00:07:04.503 }, 00:07:04.503 { 00:07:04.503 "nbd_device": "/dev/nbd5", 00:07:04.503 "bdev_name": "Nvme2n3" 00:07:04.503 }, 00:07:04.503 { 00:07:04.503 "nbd_device": "/dev/nbd6", 00:07:04.503 "bdev_name": "Nvme3n1" 00:07:04.503 } 00:07:04.503 ]' 00:07:04.503 09:35:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:04.503 09:35:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:07:04.503 09:35:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.503 09:35:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' 
'/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:07:04.503 09:35:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:04.503 09:35:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:04.503 09:35:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.503 09:35:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:04.767 09:35:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:04.767 09:35:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:04.767 09:35:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:04.767 09:35:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.767 09:35:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.767 09:35:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:04.767 09:35:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:04.767 09:35:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:04.767 09:35:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.767 09:35:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:05.028 09:35:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:05.028 09:35:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:05.029 09:35:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:05.029 09:35:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.029 09:35:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.029 09:35:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:05.029 09:35:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:05.029 09:35:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.029 09:35:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.029 09:35:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:05.029 09:35:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:05.029 09:35:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:05.029 09:35:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:05.029 09:35:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.029 09:35:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.029 09:35:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:05.029 09:35:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:05.029 09:35:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.029 09:35:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.029 09:35:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:05.289 09:35:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:05.289 09:35:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:05.289 09:35:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:05.289 09:35:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.289 09:35:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.289 09:35:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:05.289 09:35:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:05.289 09:35:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.289 09:35:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.289 09:35:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:05.550 09:35:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:05.550 09:35:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:05.550 09:35:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:05.550 09:35:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.550 09:35:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.550 09:35:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:05.550 09:35:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:05.550 09:35:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.550 09:35:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.550 09:35:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:05.810 09:35:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:05.810 09:35:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:05.810 09:35:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:05.810 09:35:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.810 09:35:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.810 09:35:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:05.810 09:35:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:05.810 09:35:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.810 09:35:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.810 09:35:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:06.070 09:35:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:06.070 09:35:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:06.070 09:35:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:07:06.070 09:35:33 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.070 09:35:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.070 09:35:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:06.070 09:35:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:06.070 09:35:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.070 09:35:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:06.070 09:35:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.070 09:35:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:06.070 09:35:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:06.330 09:35:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:06.330 09:35:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:06.331 09:35:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:06.331 09:35:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:06.331 09:35:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:06.331 09:35:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:06.331 09:35:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:06.331 09:35:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:06.331 09:35:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:06.331 09:35:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:06.331 09:35:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:06.331 09:35:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:06.331 09:35:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.331 09:35:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:06.331 09:35:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:06.331 09:35:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:06.331 09:35:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:06.331 09:35:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:06.331 09:35:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.331 09:35:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:06.331 09:35:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:06.331 09:35:33 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:06.331 09:35:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:06.331 09:35:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:06.331 09:35:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:06.331 09:35:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:06.331 09:35:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:06.592 /dev/nbd0 00:07:06.592 09:35:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:06.592 09:35:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:06.592 09:35:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:07:06.592 09:35:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:06.592 09:35:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:06.592 09:35:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:06.592 09:35:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:07:06.592 09:35:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:06.592 09:35:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:06.592 09:35:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:06.592 09:35:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:06.592 1+0 records in 00:07:06.592 1+0 records out 00:07:06.592 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000940583 s, 4.4 MB/s 00:07:06.592 09:35:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.592 09:35:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:06.592 09:35:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.592 09:35:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:06.592 09:35:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:06.592 09:35:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:06.592 09:35:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:06.592 09:35:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:07:06.854 /dev/nbd1 00:07:06.854 09:35:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:06.854 09:35:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:06.854 09:35:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:07:06.854 09:35:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:06.854 09:35:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:06.854 09:35:34 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:06.854 09:35:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:07:06.854 09:35:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:06.854 09:35:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:06.854 09:35:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:06.854 09:35:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:06.854 1+0 records in 00:07:06.854 1+0 records out 00:07:06.854 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00108067 s, 3.8 MB/s 00:07:06.854 09:35:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.854 09:35:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:06.854 09:35:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.854 09:35:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:06.854 09:35:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:06.854 09:35:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:06.854 09:35:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:06.854 09:35:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:07:07.115 /dev/nbd10 00:07:07.115 09:35:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:07.115 09:35:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:07.115 09:35:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd10 00:07:07.115 09:35:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:07.115 09:35:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:07.115 09:35:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:07.115 09:35:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd10 /proc/partitions 00:07:07.115 09:35:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:07.115 09:35:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:07.115 09:35:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:07.115 09:35:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:07.115 1+0 records in 00:07:07.115 1+0 records out 00:07:07.115 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00100647 s, 4.1 MB/s 00:07:07.115 09:35:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.115 09:35:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:07.115 09:35:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.115 09:35:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 
'!=' 0 ']' 00:07:07.115 09:35:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:07.115 09:35:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:07.115 09:35:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:07.115 09:35:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:07.377 /dev/nbd11 00:07:07.377 09:35:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:07.377 09:35:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:07.377 09:35:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd11 00:07:07.377 09:35:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:07.377 09:35:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:07.377 09:35:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:07.377 09:35:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd11 /proc/partitions 00:07:07.377 09:35:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:07.377 09:35:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:07.377 09:35:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:07.377 09:35:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:07.377 1+0 records in 00:07:07.377 1+0 records out 00:07:07.377 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00101992 s, 4.0 MB/s 00:07:07.377 09:35:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.377 09:35:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:07.377 09:35:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.377 09:35:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:07.377 09:35:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:07.377 09:35:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:07.377 09:35:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:07.377 09:35:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:07.377 /dev/nbd12 00:07:07.640 09:35:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:07.640 09:35:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:07.640 09:35:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd12 00:07:07.640 09:35:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:07.640 09:35:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:07.640 09:35:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:07.640 09:35:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd12 /proc/partitions 00:07:07.640 09:35:35 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:07.640 09:35:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:07.640 09:35:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:07.640 09:35:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:07.640 1+0 records in 00:07:07.640 1+0 records out 00:07:07.640 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0010294 s, 4.0 MB/s 00:07:07.640 09:35:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.640 09:35:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:07.640 09:35:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.640 09:35:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:07.640 09:35:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:07.640 09:35:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:07.640 09:35:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:07.640 09:35:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:07.640 /dev/nbd13 00:07:07.901 09:35:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:07.901 09:35:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:07.901 09:35:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd13 00:07:07.901 09:35:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:07.901 09:35:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:07.901 09:35:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:07.901 09:35:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd13 /proc/partitions 00:07:07.901 09:35:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:07.901 09:35:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:07.901 09:35:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:07.901 09:35:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:07.901 1+0 records in 00:07:07.901 1+0 records out 00:07:07.901 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000950107 s, 4.3 MB/s 00:07:07.901 09:35:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.901 09:35:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:07.901 09:35:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.901 09:35:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:07.901 09:35:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:07.901 09:35:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 
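The waitfornbd helper that the trace keeps cycling through above can be reconstructed from the xtrace roughly as follows. This is a sketch based only on the commands visible in the log; the exact body in common/autotest_common.sh, including any delay between retries, may differ:

    waitfornbd() {
        local nbd_name=$1
        local i
        # wait until the kernel lists the device in /proc/partitions
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumption: the wait between retries is not visible in the trace
        done
        # then prove the device serves I/O: one direct 4 KiB read must yield a non-empty file
        for ((i = 1; i <= 20; i++)); do
            dd if="/dev/$nbd_name" of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
            local size
            size=$(stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest)
            rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
            [ "$size" != 0 ] && return 0
        done
        return 1
    }

Note also the two calling conventions visible in this log: in the first pass nbd_start_disk is invoked with only a bdev name and the RPC picks and prints a free /dev/nbdX, while the second pass passes the target device explicitly (e.g. nbd_start_disk Nvme0n1 /dev/nbd0).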
00:07:07.901 09:35:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:07.901 09:35:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:08.164 /dev/nbd14 00:07:08.164 09:35:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:08.164 09:35:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:08.164 09:35:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd14 00:07:08.164 09:35:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:08.164 09:35:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:08.164 09:35:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:08.164 09:35:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd14 /proc/partitions 00:07:08.164 09:35:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:08.164 09:35:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:08.164 09:35:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:08.164 09:35:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:08.164 1+0 records in 00:07:08.164 1+0 records out 00:07:08.164 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106136 s, 3.9 MB/s 00:07:08.164 09:35:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.164 09:35:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:08.164 09:35:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.164 09:35:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:08.164 09:35:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:08.164 09:35:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:08.164 09:35:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:08.164 09:35:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:08.164 09:35:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.164 09:35:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:08.164 09:35:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:08.164 { 00:07:08.164 "nbd_device": "/dev/nbd0", 00:07:08.164 "bdev_name": "Nvme0n1" 00:07:08.164 }, 00:07:08.164 { 00:07:08.164 "nbd_device": "/dev/nbd1", 00:07:08.164 "bdev_name": "Nvme1n1p1" 00:07:08.164 }, 00:07:08.164 { 00:07:08.164 "nbd_device": "/dev/nbd10", 00:07:08.164 "bdev_name": "Nvme1n1p2" 00:07:08.164 }, 00:07:08.164 { 00:07:08.164 "nbd_device": "/dev/nbd11", 00:07:08.164 "bdev_name": "Nvme2n1" 00:07:08.164 }, 00:07:08.164 { 00:07:08.164 "nbd_device": "/dev/nbd12", 00:07:08.164 "bdev_name": "Nvme2n2" 00:07:08.164 }, 00:07:08.164 { 00:07:08.164 "nbd_device": "/dev/nbd13", 00:07:08.164 "bdev_name": "Nvme2n3" 00:07:08.164 }, 00:07:08.164 { 
00:07:08.164 "nbd_device": "/dev/nbd14", 00:07:08.164 "bdev_name": "Nvme3n1" 00:07:08.164 } 00:07:08.164 ]' 00:07:08.164 09:35:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:08.164 { 00:07:08.164 "nbd_device": "/dev/nbd0", 00:07:08.164 "bdev_name": "Nvme0n1" 00:07:08.164 }, 00:07:08.164 { 00:07:08.164 "nbd_device": "/dev/nbd1", 00:07:08.164 "bdev_name": "Nvme1n1p1" 00:07:08.164 }, 00:07:08.164 { 00:07:08.164 "nbd_device": "/dev/nbd10", 00:07:08.164 "bdev_name": "Nvme1n1p2" 00:07:08.164 }, 00:07:08.164 { 00:07:08.164 "nbd_device": "/dev/nbd11", 00:07:08.164 "bdev_name": "Nvme2n1" 00:07:08.164 }, 00:07:08.164 { 00:07:08.164 "nbd_device": "/dev/nbd12", 00:07:08.164 "bdev_name": "Nvme2n2" 00:07:08.164 }, 00:07:08.164 { 00:07:08.164 "nbd_device": "/dev/nbd13", 00:07:08.164 "bdev_name": "Nvme2n3" 00:07:08.164 }, 00:07:08.164 { 00:07:08.164 "nbd_device": "/dev/nbd14", 00:07:08.164 "bdev_name": "Nvme3n1" 00:07:08.164 } 00:07:08.164 ]' 00:07:08.164 09:35:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:08.425 09:35:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:08.425 /dev/nbd1 00:07:08.425 /dev/nbd10 00:07:08.425 /dev/nbd11 00:07:08.425 /dev/nbd12 00:07:08.425 /dev/nbd13 00:07:08.425 /dev/nbd14' 00:07:08.425 09:35:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:08.425 /dev/nbd1 00:07:08.425 /dev/nbd10 00:07:08.425 /dev/nbd11 00:07:08.425 /dev/nbd12 00:07:08.425 /dev/nbd13 00:07:08.425 /dev/nbd14' 00:07:08.425 09:35:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:08.425 09:35:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:08.425 09:35:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:08.425 09:35:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:08.425 09:35:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:08.425 09:35:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:08.425 09:35:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:08.425 09:35:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:08.425 09:35:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:08.425 09:35:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:08.425 09:35:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:08.425 09:35:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:08.425 256+0 records in 00:07:08.425 256+0 records out 00:07:08.425 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00542546 s, 193 MB/s 00:07:08.425 09:35:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:08.425 09:35:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:08.425 256+0 records in 00:07:08.425 256+0 records out 00:07:08.425 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.168971 s, 6.2 MB/s 00:07:08.425 
09:35:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:08.425 09:35:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:08.686 256+0 records in 00:07:08.686 256+0 records out 00:07:08.686 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.172452 s, 6.1 MB/s 00:07:08.686 09:35:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:08.686 09:35:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:08.949 256+0 records in 00:07:08.949 256+0 records out 00:07:08.949 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.163778 s, 6.4 MB/s 00:07:08.949 09:35:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:08.949 09:35:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:08.949 256+0 records in 00:07:08.949 256+0 records out 00:07:08.949 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.188324 s, 5.6 MB/s 00:07:08.949 09:35:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:08.949 09:35:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:09.210 256+0 records in 00:07:09.210 256+0 records out 00:07:09.210 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.148195 s, 7.1 MB/s 00:07:09.210 09:35:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:09.210 09:35:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:09.210 256+0 records in 00:07:09.210 256+0 records out 00:07:09.210 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.147124 s, 7.1 MB/s 00:07:09.210 09:35:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:09.210 09:35:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:09.472 256+0 records in 00:07:09.472 256+0 records out 00:07:09.472 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.148833 s, 7.0 MB/s 00:07:09.472 09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:09.472 09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:09.472 09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:09.472 09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:09.472 09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:09.472 09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:09.472 09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:09.472 09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:09.472 09:35:37 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:09.472 09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:09.472 09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:09.472 09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:09.472 09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:09.472 09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:09.472 09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:09.472 09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:09.472 09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:09.472 09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:09.472 09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:09.472 09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:09.472 09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:09.472 09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:09.472 09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:09.472 09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:09.472 09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:09.472 09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:09.472 09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:09.472 09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:09.472 09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:09.733 09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:09.733 09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:09.733 09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:09.733 09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:09.733 09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:09.733 09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:09.733 09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:09.733 09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:09.733 
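The nbd_dd_data_verify pass whose cmp loop just completed above is a plain write-then-compare: one shared 1 MiB random file is written to every nbd device with O_DIRECT, then each device is compared byte-for-byte against that file. Condensed from the commands in this trace:

    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256           # 1 MiB of random data
    for dev in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14; do
      dd if=nbdrandtest of="$dev" bs=4096 count=256 oflag=direct  # write phase
    done
    for dev in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14; do
      cmp -b -n 1M nbdrandtest "$dev"                             # verify phase; exits non-zero at the first differing byte
    done
    rm nbdrandtest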
09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:09.733 09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:09.995 09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:09.995 09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:09.995 09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:09.995 09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:09.995 09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:09.995 09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:09.995 09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:09.995 09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:09.995 09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:09.995 09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:10.258 09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:10.258 09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:10.258 09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:10.258 09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:10.258 09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:10.258 09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:10.258 09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:10.258 09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:10.258 09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:10.258 09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:10.258 09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:10.258 09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:10.258 09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:10.258 09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:10.258 09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:10.258 09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:10.258 09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:10.258 09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:10.258 09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:10.258 09:35:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:10.572 09:35:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:10.572 09:35:38 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:10.572 09:35:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:10.572 09:35:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:10.572 09:35:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:10.572 09:35:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:10.572 09:35:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:10.572 09:35:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:10.572 09:35:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:10.572 09:35:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:10.833 09:35:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:10.833 09:35:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:10.833 09:35:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:10.833 09:35:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:10.833 09:35:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:10.833 09:35:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:10.833 09:35:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:10.833 09:35:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:10.833 09:35:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:10.833 09:35:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:11.094 09:35:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:11.094 09:35:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:11.094 09:35:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:11.094 09:35:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:11.094 09:35:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:11.094 09:35:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:11.094 09:35:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:11.094 09:35:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:11.094 09:35:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:11.094 09:35:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.094 09:35:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:11.356 09:35:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:11.356 09:35:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:11.356 09:35:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:11.356 09:35:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:11.356 
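Teardown, traced above and concluding with the empty nbd_get_disks result just below, mirrors setup: each device is detached over the RPC socket and waitfornbd_exit polls /proc/partitions until the kernel drops the node. A sketch of one iteration, with the same caveat that the inter-poll sleep is assumed:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
    for ((i = 1; i <= 20; i++)); do
      grep -q -w nbd0 /proc/partitions || break   # the trace's 'break' fires once the entry is gone
      sleep 0.1                                   # assumed
    done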
09:35:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:11.356 09:35:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:11.356 09:35:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:11.356 09:35:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:11.356 09:35:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:11.356 09:35:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:11.356 09:35:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:11.356 09:35:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:11.356 09:35:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:11.356 09:35:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.356 09:35:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:11.356 09:35:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:11.356 malloc_lvol_verify 00:07:11.617 09:35:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:11.617 32385dd7-f194-402c-8e88-7f4621b909db 00:07:11.617 09:35:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:11.878 a7054fb2-3159-4ba5-871a-8311073d363c 00:07:11.878 09:35:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:12.140 /dev/nbd0 00:07:12.140 09:35:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:12.140 09:35:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:12.140 09:35:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:12.140 09:35:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:12.140 09:35:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:12.140 mke2fs 1.47.0 (5-Feb-2023) 00:07:12.140 Discarding device blocks: 0/4096 done 00:07:12.140 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:12.140 00:07:12.140 Allocating group tables: 0/1 done 00:07:12.140 Writing inode tables: 0/1 done 00:07:12.140 Creating journal (1024 blocks): done 00:07:12.140 Writing superblocks and filesystem accounting information: 0/1 done 00:07:12.140 00:07:12.140 09:35:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:12.140 09:35:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.140 09:35:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:12.140 09:35:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:12.140 09:35:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:12.140 09:35:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.140 09:35:39 
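The final nbd check above (nbd_with_lvol_verify) layers a logical volume on a malloc bdev and proves the exported block device is usable end to end; mkfs.ext4 completing its superblock and journal writes is the pass criterion (its output reports 4096 1k blocks, i.e. a 4 MiB volume). The RPC sequence, as traced:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    $rpc -s $sock bdev_malloc_create -b malloc_lvol_verify 16 512   # size and block-size arguments as passed in the trace
    $rpc -s $sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
    $rpc -s $sock bdev_lvol_create lvol 4 -l lvs                    # volume "lvol" in store "lvs"
    $rpc -s $sock nbd_start_disk lvs/lvol /dev/nbd0
    mkfs.ext4 /dev/nbd0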
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:12.402 09:35:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:12.402 09:35:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:12.402 09:35:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:12.402 09:35:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.402 09:35:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.402 09:35:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:12.402 09:35:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:12.402 09:35:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.402 09:35:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61532 00:07:12.402 09:35:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 61532 ']' 00:07:12.402 09:35:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 61532 00:07:12.402 09:35:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:07:12.402 09:35:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:12.402 09:35:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61532 00:07:12.402 09:35:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:12.402 killing process with pid 61532 00:07:12.402 09:35:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:12.402 09:35:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61532' 00:07:12.402 09:35:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@971 -- # kill 61532 00:07:12.402 09:35:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@976 -- # wait 61532 00:07:13.345 09:35:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:13.345 00:07:13.345 real 0m11.448s 00:07:13.345 user 0m15.940s 00:07:13.345 sys 0m3.752s 00:07:13.345 09:35:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:13.345 ************************************ 00:07:13.345 END TEST bdev_nbd 00:07:13.345 ************************************ 00:07:13.345 09:35:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:13.345 09:35:40 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:13.345 09:35:40 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:07:13.345 09:35:40 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:07:13.345 skipping fio tests on NVMe due to multi-ns failures. 00:07:13.345 09:35:40 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
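killprocess 61532 above is the harness's standard teardown for the spdk-nbd daemon. Distilled loosely from this trace (autotest_common.sh@952-976); the real helper carries extra branching (sudo handling, non-Linux fallbacks) that the trace only hints at:

    killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid"                                              # still alive?
      [ "$(uname)" = Linux ] && ps --no-headers -o comm= "$pid"   # name check; here it resolves to reactor_0
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                                 # reap it and surface its exit status
    }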
00:07:13.345 09:35:40 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:13.345 09:35:40 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:13.345 09:35:40 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:07:13.345 09:35:40 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:13.345 09:35:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:13.345 ************************************ 00:07:13.345 START TEST bdev_verify 00:07:13.345 ************************************ 00:07:13.345 09:35:40 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:13.345 [2024-11-07 09:35:40.787359] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:07:13.345 [2024-11-07 09:35:40.787489] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61945 ] 00:07:13.345 [2024-11-07 09:35:40.950105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:13.606 [2024-11-07 09:35:41.051703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.606 [2024-11-07 09:35:41.051727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.179 Running I/O for 5 seconds... 
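The run just launched drives every bdev from the generated bdev.json through bdevperf: queue depth 128 (-q), 4 KiB I/O (-o 4096), the verify workload for 5 seconds (-t 5), on core mask 0x3. With -C, the results below show one job per reactor per bdev, which is why every device reports both a Core Mask 0x1 and a Core Mask 0x2 row. The invocation, verbatim from the trace:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3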
00:07:16.506 18624.00 IOPS, 72.75 MiB/s
[2024-11-07T09:35:45.129Z] 20032.00 IOPS, 78.25 MiB/s
[2024-11-07T09:35:46.073Z] 20522.67 IOPS, 80.17 MiB/s
[2024-11-07T09:35:47.016Z] 20016.00 IOPS, 78.19 MiB/s
[2024-11-07T09:35:47.016Z] 19584.00 IOPS, 76.50 MiB/s
00:07:19.345 Latency(us)
00:07:19.345 [2024-11-07T09:35:47.016Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:19.345 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:19.345 Verification LBA range: start 0x0 length 0xbd0bd
00:07:19.346 Nvme0n1 : 5.10 1355.23 5.29 0.00 0.00 94251.11 18249.26 89128.96
00:07:19.346 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:19.346 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:07:19.346 Nvme0n1 : 5.05 1394.48 5.45 0.00 0.00 91399.68 18551.73 79046.50
00:07:19.346 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:19.346 Verification LBA range: start 0x0 length 0x4ff80
00:07:19.346 Nvme1n1p1 : 5.10 1354.76 5.29 0.00 0.00 94151.07 16232.76 84692.68
00:07:19.346 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:19.346 Verification LBA range: start 0x4ff80 length 0x4ff80
00:07:19.346 Nvme1n1p1 : 5.08 1397.86 5.46 0.00 0.00 91047.88 15426.17 74610.22
00:07:19.346 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:19.346 Verification LBA range: start 0x0 length 0x4ff7f
00:07:19.346 Nvme1n1p2 : 5.11 1353.87 5.29 0.00 0.00 93920.75 18350.08 73400.32
00:07:19.346 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:19.346 Verification LBA range: start 0x4ff7f length 0x4ff7f
00:07:19.346 Nvme1n1p2 : 5.08 1397.19 5.46 0.00 0.00 90812.33 13308.85 75416.81
00:07:19.346 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:19.346 Verification LBA range: start 0x0 length 0x80000
00:07:19.346 Nvme2n1 : 5.11 1352.80 5.28 0.00 0.00 93760.90 21173.17 70577.23
00:07:19.346 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:19.346 Verification LBA range: start 0x80000 length 0x80000
00:07:19.346 Nvme2n1 : 5.09 1396.45 5.45 0.00 0.00 90728.13 14014.62 79046.50
00:07:19.346 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:19.346 Verification LBA range: start 0x0 length 0x80000
00:07:19.346 Nvme2n2 : 5.11 1351.67 5.28 0.00 0.00 93674.35 20971.52 71787.13
00:07:19.346 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:19.346 Verification LBA range: start 0x80000 length 0x80000
00:07:19.346 Nvme2n2 : 5.10 1404.17 5.49 0.00 0.00 90395.15 10032.05 81062.99
00:07:19.346 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:19.346 Verification LBA range: start 0x0 length 0x80000
00:07:19.346 Nvme2n3 : 5.12 1351.31 5.28 0.00 0.00 93571.51 20568.22 73400.32
00:07:19.346 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:19.346 Verification LBA range: start 0x80000 length 0x80000
00:07:19.346 Nvme2n3 : 5.11 1403.31 5.48 0.00 0.00 90260.10 11998.13 81062.99
00:07:19.346 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:19.346 Verification LBA range: start 0x0 length 0x20000
00:07:19.346 Nvme3n1 : 5.12 1350.94 5.28 0.00 0.00 93462.22 19559.98 74610.22
00:07:19.346 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:19.346 Verification LBA range: start 0x20000 length 0x20000
Nvme3n1 : 5.11 1402.59 5.48 0.00 0.00 90165.48 12703.90 76223.41
[2024-11-07T09:35:47.017Z] ===================================================================================================================
00:07:19.346 [2024-11-07T09:35:47.017Z] Total : 19266.63 75.26 0.00 0.00 92232.39 10032.05 89128.96
00:07:20.288
00:07:20.288 real 0m7.168s
00:07:20.288 user 0m13.302s
00:07:20.288 sys 0m0.261s
00:07:20.288 09:35:47 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1128 -- # xtrace_disable
00:07:20.288 09:35:47 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:07:20.288 ************************************
00:07:20.288 END TEST bdev_verify
00:07:20.288 ************************************
00:07:20.288 09:35:47 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:07:20.288 09:35:47 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']'
00:07:20.288 09:35:47 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable
00:07:20.288 09:35:47 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:07:20.548 ************************************
00:07:20.548 START TEST bdev_verify_big_io
00:07:20.548 ************************************
00:07:20.548 09:35:47 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:07:20.548 [2024-11-07 09:35:48.039666] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization...
00:07:20.548 [2024-11-07 09:35:48.039827] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62038 ]
00:07:20.548 [2024-11-07 09:35:48.206870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:20.810 [2024-11-07 09:35:48.341005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:20.810 [2024-11-07 09:35:48.341110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:21.786 Running I/O for 5 seconds...
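A note on reading these bdevperf latency tables (the one above and the big-I/O one that follows): the columns are job runtime in seconds, per-job IOPS and MiB/s, then Fail/s and TO/s (presumably failed and timed-out I/Os per second), then average/min/max latency in microseconds, per the Latency(us) header. Throughput is simply IOPS times the I/O size, which makes a quick consistency check possible:

    awk 'BEGIN { print 1355.23 * 4096 / 1048576 }'    # ~5.29 MiB/s, matching the first Nvme0n1 row
    awk 'BEGIN { print 19266.63 * 4096 / 1048576 }'   # ~75.26 MiB/s, matching the Total row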
00:07:27.649 2641.00 IOPS, 165.06 MiB/s
[2024-11-07T09:35:55.892Z] 3124.50 IOPS, 195.28 MiB/s
00:07:28.221 Latency(us)
00:07:28.221 [2024-11-07T09:35:55.892Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:28.221 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:28.221 Verification LBA range: start 0x0 length 0xbd0b
00:07:28.221 Nvme0n1 : 6.18 49.16 3.07 0.00 0.00 2447232.64 22383.06 2297188.04
00:07:28.221 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:28.221 Verification LBA range: start 0xbd0b length 0xbd0b
00:07:28.221 Nvme0n1 : 5.74 129.67 8.10 0.00 0.00 944209.31 22282.24 1109877.37
00:07:28.221 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:28.221 Verification LBA range: start 0x0 length 0x4ff8
00:07:28.221 Nvme1n1p1 : 6.05 68.46 4.28 0.00 0.00 1698196.30 63317.86 1690627.15
00:07:28.221 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:28.221 Verification LBA range: start 0x4ff8 length 0x4ff8
00:07:28.221 Nvme1n1p1 : 5.81 118.84 7.43 0.00 0.00 1005530.90 96388.33 1645457.72
00:07:28.221 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:28.221 Verification LBA range: start 0x0 length 0x4ff7
00:07:28.221 Nvme1n1p2 : 6.08 73.74 4.61 0.00 0.00 1510067.26 25206.15 1651910.50
00:07:28.221 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:28.221 Verification LBA range: start 0x4ff7 length 0x4ff7
00:07:28.221 Nvme1n1p2 : 5.81 112.95 7.06 0.00 0.00 1031488.07 114536.76 1497043.89
00:07:28.221 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:28.221 Verification LBA range: start 0x0 length 0x8000
00:07:28.221 Nvme2n1 : 6.14 79.42 4.96 0.00 0.00 1317384.96 24298.73 1690627.15
00:07:28.221 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:28.221 Verification LBA range: start 0x8000 length 0x8000
00:07:28.221 Nvme2n1 : 5.87 135.72 8.48 0.00 0.00 836565.49 66947.54 1045349.61
00:07:28.221 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:28.221 Verification LBA range: start 0x0 length 0x8000
00:07:28.221 Nvme2n2 : 6.26 99.28 6.20 0.00 0.00 1023011.12 20366.57 1716438.25
00:07:28.221 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:28.221 Verification LBA range: start 0x8000 length 0x8000
00:07:28.221 Nvme2n2 : 5.88 141.60 8.85 0.00 0.00 789157.90 61704.66 1077613.49
00:07:28.221 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:28.221 Verification LBA range: start 0x0 length 0x8000
00:07:28.221 Nvme2n3 : 6.45 148.92 9.31 0.00 0.00 652692.77 8116.38 1768060.46
00:07:28.221 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:28.221 Verification LBA range: start 0x8000 length 0x8000
00:07:28.221 Nvme2n3 : 5.94 150.81 9.43 0.00 0.00 722815.38 26819.35 1109877.37
00:07:28.221 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:28.221 Verification LBA range: start 0x0 length 0x2000
00:07:28.221 Nvme3n1 : 6.69 264.34 16.52 0.00 0.00 351944.08 444.26 1806777.11
00:07:28.221 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:28.221 Verification LBA range: start 0x2000 length 0x2000
00:07:28.221 Nvme3n1 : 5.99 165.20 10.32 0.00 0.00 641226.59 2949.12 1142141.24
00:07:28.221 [2024-11-07T09:35:55.892Z] ===================================================================================================================
00:07:28.221 [2024-11-07T09:35:55.892Z] Total : 1738.10 108.63 0.00 0.00 879427.98 444.26 2297188.04
00:07:30.137
00:07:30.137 real 0m9.717s
00:07:30.137 user 0m18.304s
00:07:30.137 sys 0m0.339s
00:07:30.137 09:35:57 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable
00:07:30.137 09:35:57 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:07:30.137 ************************************
00:07:30.137 END TEST bdev_verify_big_io
00:07:30.137 ************************************
00:07:30.137 09:35:57 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:30.137 09:35:57 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']'
00:07:30.137 09:35:57 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable
00:07:30.137 09:35:57 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:07:30.137 ************************************
00:07:30.137 START TEST bdev_write_zeroes
00:07:30.137 ************************************
00:07:30.137 09:35:57 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:30.399 [2024-11-07 09:35:57.823799] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization...
00:07:30.399 [2024-11-07 09:35:57.823949] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62158 ]
00:07:30.399 [2024-11-07 09:35:57.988172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:30.660 [2024-11-07 09:35:58.096124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:31.231 Running I/O for 1 seconds...
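bdev_verify_big_io above is the same harness with -o 65536, so each verify I/O spans sixteen 4 KiB blocks, which accounts for the far lower IOPS and the much wider latency spread in its table. The bdev_write_zeroes run just launched drops to one reactor (no -m flag; EAL -c 0x1 in the parameters above) and a 1-second write_zeroes workload. A sanity check against its first progress line, which follows:

    awk 'BEGIN { print 47471.00 * 4096 / 1048576 }'   # ~185.43 MiB/s at 4 KiB per I/O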
00:07:32.169 47471.00 IOPS, 185.43 MiB/s
00:07:32.169 Latency(us)
00:07:32.169 [2024-11-07T09:35:59.840Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:32.169 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:32.169 Nvme0n1 : 1.03 6767.36 26.43 0.00 0.00 18868.70 7108.14 29642.44
00:07:32.169 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:32.169 Nvme1n1p1 : 1.03 6775.10 26.47 0.00 0.00 18816.59 12451.84 29037.49
00:07:32.169 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:32.169 Nvme1n1p2 : 1.03 6766.58 26.43 0.00 0.00 18780.95 11443.59 29239.14
00:07:32.169 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:32.169 Nvme2n1 : 1.03 6758.83 26.40 0.00 0.00 18771.99 11141.12 30449.03
00:07:32.169 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:32.170 Nvme2n2 : 1.03 6750.96 26.37 0.00 0.00 18765.33 11393.18 30045.74
00:07:32.170 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:32.170 Nvme2n3 : 1.03 6743.25 26.34 0.00 0.00 18756.98 12401.43 29239.14
00:07:32.170 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:32.170 Nvme3n1 : 1.04 6735.56 26.31 0.00 0.00 18691.61 9175.04 28432.54
00:07:32.170 [2024-11-07T09:35:59.841Z] ===================================================================================================================
00:07:32.170 [2024-11-07T09:35:59.841Z] Total : 47297.64 184.76 0.00 0.00 18778.85 7108.14 30449.03
00:07:33.110
00:07:33.110 real 0m2.921s
00:07:33.110 user 0m2.558s
00:07:33.110 sys 0m0.238s
00:07:33.110 09:36:00 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable
00:07:33.110 09:36:00 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:07:33.110 ************************************
00:07:33.110 END TEST bdev_write_zeroes
00:07:33.110 ************************************
00:07:33.110 09:36:00 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:33.110 09:36:00 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']'
00:07:33.110 09:36:00 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable
00:07:33.110 09:36:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:07:33.110 ************************************
00:07:33.110 START TEST bdev_json_nonenclosed
00:07:33.110 ************************************
00:07:33.110 09:36:00 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:33.369 [2024-11-07 09:36:00.814376] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization...
00:07:33.369 [2024-11-07 09:36:00.814552] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62211 ] 00:07:33.369 [2024-11-07 09:36:00.984198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.631 [2024-11-07 09:36:01.124347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.631 [2024-11-07 09:36:01.124464] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:33.631 [2024-11-07 09:36:01.124487] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:33.631 [2024-11-07 09:36:01.124498] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:33.891 00:07:33.891 real 0m0.617s 00:07:33.891 user 0m0.386s 00:07:33.891 sys 0m0.123s 00:07:33.891 ************************************ 00:07:33.891 END TEST bdev_json_nonenclosed 00:07:33.891 ************************************ 00:07:33.891 09:36:01 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:33.891 09:36:01 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:33.892 09:36:01 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:33.892 09:36:01 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:07:33.892 09:36:01 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:33.892 09:36:01 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:33.892 ************************************ 00:07:33.892 START TEST bdev_json_nonarray 00:07:33.892 ************************************ 00:07:33.892 09:36:01 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:33.892 [2024-11-07 09:36:01.489809] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:07:33.892 [2024-11-07 09:36:01.489973] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62242 ] 00:07:34.152 [2024-11-07 09:36:01.649739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.152 [2024-11-07 09:36:01.800237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.152 [2024-11-07 09:36:01.800374] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
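These two cases feed bdevperf deliberately malformed configs and pass precisely because the loader rejects them with the errors logged above. Neither nonenclosed.json nor nonarray.json is printed in this log, so the shapes below are only illustrative guesses at content that would trigger those two messages:

    # hypothetical nonenclosed.json -- valid JSON fragment, but the top level is not enclosed in {}:
    "subsystems": []
    # hypothetical nonarray.json -- enclosed, but 'subsystems' is not an array:
    { "subsystems": { } }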
00:07:34.152 [2024-11-07 09:36:01.800397] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:34.152 [2024-11-07 09:36:01.800409] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:34.414 00:07:34.414 real 0m0.608s 00:07:34.414 user 0m0.367s 00:07:34.414 sys 0m0.134s 00:07:34.414 09:36:02 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:34.414 ************************************ 00:07:34.414 09:36:02 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:34.414 END TEST bdev_json_nonarray 00:07:34.414 ************************************ 00:07:34.414 09:36:02 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:07:34.414 09:36:02 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:07:34.414 09:36:02 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:34.414 09:36:02 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:34.414 09:36:02 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:34.414 09:36:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:34.677 ************************************ 00:07:34.677 START TEST bdev_gpt_uuid 00:07:34.677 ************************************ 00:07:34.677 09:36:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1127 -- # bdev_gpt_uuid 00:07:34.677 09:36:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:07:34.677 09:36:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:07:34.677 09:36:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62267 00:07:34.677 09:36:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:34.677 09:36:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 62267 00:07:34.677 09:36:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@833 -- # '[' -z 62267 ']' 00:07:34.677 09:36:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.677 09:36:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:34.677 09:36:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.677 09:36:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:34.677 09:36:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:34.677 09:36:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:34.677 [2024-11-07 09:36:02.187093] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:07:34.677 [2024-11-07 09:36:02.187262] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62267 ] 00:07:34.938 [2024-11-07 09:36:02.355085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.938 [2024-11-07 09:36:02.505682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.882 09:36:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:35.882 09:36:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@866 -- # return 0 00:07:35.882 09:36:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:35.882 09:36:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.882 09:36:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:36.144 Some configs were skipped because the RPC state that can call them passed over. 00:07:36.144 09:36:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.144 09:36:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:07:36.144 09:36:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.144 09:36:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:36.144 09:36:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.144 09:36:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:36.144 09:36:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.144 09:36:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:36.144 09:36:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.144 09:36:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:07:36.144 { 00:07:36.144 "name": "Nvme1n1p1", 00:07:36.144 "aliases": [ 00:07:36.144 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:36.144 ], 00:07:36.144 "product_name": "GPT Disk", 00:07:36.144 "block_size": 4096, 00:07:36.144 "num_blocks": 655104, 00:07:36.144 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:36.144 "assigned_rate_limits": { 00:07:36.144 "rw_ios_per_sec": 0, 00:07:36.144 "rw_mbytes_per_sec": 0, 00:07:36.144 "r_mbytes_per_sec": 0, 00:07:36.144 "w_mbytes_per_sec": 0 00:07:36.144 }, 00:07:36.144 "claimed": false, 00:07:36.144 "zoned": false, 00:07:36.144 "supported_io_types": { 00:07:36.144 "read": true, 00:07:36.144 "write": true, 00:07:36.144 "unmap": true, 00:07:36.144 "flush": true, 00:07:36.144 "reset": true, 00:07:36.144 "nvme_admin": false, 00:07:36.144 "nvme_io": false, 00:07:36.144 "nvme_io_md": false, 00:07:36.144 "write_zeroes": true, 00:07:36.144 "zcopy": false, 00:07:36.144 "get_zone_info": false, 00:07:36.144 "zone_management": false, 00:07:36.144 "zone_append": false, 00:07:36.144 "compare": true, 00:07:36.144 "compare_and_write": false, 00:07:36.144 "abort": true, 00:07:36.144 "seek_hole": false, 00:07:36.144 "seek_data": false, 00:07:36.144 "copy": true, 00:07:36.144 "nvme_iov_md": false 00:07:36.144 }, 00:07:36.144 "driver_specific": { 
00:07:36.144 "gpt": { 00:07:36.144 "base_bdev": "Nvme1n1", 00:07:36.144 "offset_blocks": 256, 00:07:36.144 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:36.144 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:36.144 "partition_name": "SPDK_TEST_first" 00:07:36.144 } 00:07:36.144 } 00:07:36.144 } 00:07:36.144 ]' 00:07:36.144 09:36:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:07:36.144 09:36:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:07:36.144 09:36:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:07:36.144 09:36:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:36.144 09:36:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:36.144 09:36:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:36.144 09:36:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:36.144 09:36:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.144 09:36:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:36.144 09:36:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.144 09:36:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:07:36.144 { 00:07:36.144 "name": "Nvme1n1p2", 00:07:36.144 "aliases": [ 00:07:36.145 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:36.145 ], 00:07:36.145 "product_name": "GPT Disk", 00:07:36.145 "block_size": 4096, 00:07:36.145 "num_blocks": 655103, 00:07:36.145 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:36.145 "assigned_rate_limits": { 00:07:36.145 "rw_ios_per_sec": 0, 00:07:36.145 "rw_mbytes_per_sec": 0, 00:07:36.145 "r_mbytes_per_sec": 0, 00:07:36.145 "w_mbytes_per_sec": 0 00:07:36.145 }, 00:07:36.145 "claimed": false, 00:07:36.145 "zoned": false, 00:07:36.145 "supported_io_types": { 00:07:36.145 "read": true, 00:07:36.145 "write": true, 00:07:36.145 "unmap": true, 00:07:36.145 "flush": true, 00:07:36.145 "reset": true, 00:07:36.145 "nvme_admin": false, 00:07:36.145 "nvme_io": false, 00:07:36.145 "nvme_io_md": false, 00:07:36.145 "write_zeroes": true, 00:07:36.145 "zcopy": false, 00:07:36.145 "get_zone_info": false, 00:07:36.145 "zone_management": false, 00:07:36.145 "zone_append": false, 00:07:36.145 "compare": true, 00:07:36.145 "compare_and_write": false, 00:07:36.145 "abort": true, 00:07:36.145 "seek_hole": false, 00:07:36.145 "seek_data": false, 00:07:36.145 "copy": true, 00:07:36.145 "nvme_iov_md": false 00:07:36.145 }, 00:07:36.145 "driver_specific": { 00:07:36.145 "gpt": { 00:07:36.145 "base_bdev": "Nvme1n1", 00:07:36.145 "offset_blocks": 655360, 00:07:36.145 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:36.145 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:36.145 "partition_name": "SPDK_TEST_second" 00:07:36.145 } 00:07:36.145 } 00:07:36.145 } 00:07:36.145 ]' 00:07:36.145 09:36:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:07:36.405 09:36:03 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:07:36.405 09:36:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:07:36.405 09:36:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:36.405 09:36:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:36.405 09:36:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:36.405 09:36:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 62267 00:07:36.405 09:36:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@952 -- # '[' -z 62267 ']' 00:07:36.405 09:36:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # kill -0 62267 00:07:36.405 09:36:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@957 -- # uname 00:07:36.405 09:36:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:36.405 09:36:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62267 00:07:36.405 09:36:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:36.405 killing process with pid 62267 00:07:36.405 09:36:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:36.405 09:36:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62267' 00:07:36.405 09:36:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@971 -- # kill 62267 00:07:36.405 09:36:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@976 -- # wait 62267 00:07:38.337 00:07:38.337 real 0m3.455s 00:07:38.337 user 0m3.397s 00:07:38.337 sys 0m0.604s 00:07:38.337 09:36:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:38.337 09:36:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:38.337 ************************************ 00:07:38.337 END TEST bdev_gpt_uuid 00:07:38.337 ************************************ 00:07:38.337 09:36:05 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:07:38.337 09:36:05 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:38.337 09:36:05 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:07:38.337 09:36:05 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:38.337 09:36:05 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:38.337 09:36:05 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:38.337 09:36:05 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:38.337 09:36:05 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:38.337 09:36:05 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:38.337 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:38.599 Waiting for block devices as requested 00:07:38.599 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:38.599 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:07:38.859 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:38.859 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:44.139 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:44.139 09:36:11 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:07:44.139 09:36:11 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:07:44.139 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:44.139 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:44.139 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:44.139 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:44.139 09:36:11 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:07:44.139 00:07:44.139 real 0m59.007s 00:07:44.139 user 1m14.603s 00:07:44.139 sys 0m8.743s 00:07:44.139 09:36:11 blockdev_nvme_gpt -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:44.139 ************************************ 00:07:44.139 END TEST blockdev_nvme_gpt 00:07:44.139 09:36:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:44.139 ************************************ 00:07:44.139 09:36:11 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:44.139 09:36:11 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:44.139 09:36:11 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:44.139 09:36:11 -- common/autotest_common.sh@10 -- # set +x 00:07:44.139 ************************************ 00:07:44.139 START TEST nvme 00:07:44.139 ************************************ 00:07:44.139 09:36:11 nvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:44.399 * Looking for test storage... 00:07:44.399 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:44.399 09:36:11 nvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:44.399 09:36:11 nvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:44.399 09:36:11 nvme -- common/autotest_common.sh@1691 -- # lcov --version 00:07:44.399 09:36:11 nvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:44.399 09:36:11 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:44.399 09:36:11 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:44.399 09:36:11 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:44.399 09:36:11 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:44.399 09:36:11 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:44.399 09:36:11 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:44.399 09:36:11 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:44.399 09:36:11 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:44.399 09:36:11 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:44.399 09:36:11 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:44.399 09:36:11 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:44.399 09:36:11 nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:44.399 09:36:11 nvme -- scripts/common.sh@345 -- # : 1 00:07:44.399 09:36:11 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:44.399 09:36:11 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:44.399 09:36:11 nvme -- scripts/common.sh@365 -- # decimal 1 00:07:44.399 09:36:11 nvme -- scripts/common.sh@353 -- # local d=1 00:07:44.399 09:36:11 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:44.399 09:36:11 nvme -- scripts/common.sh@355 -- # echo 1 00:07:44.399 09:36:11 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:44.399 09:36:11 nvme -- scripts/common.sh@366 -- # decimal 2 00:07:44.399 09:36:11 nvme -- scripts/common.sh@353 -- # local d=2 00:07:44.399 09:36:11 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:44.399 09:36:11 nvme -- scripts/common.sh@355 -- # echo 2 00:07:44.399 09:36:11 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:44.400 09:36:11 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:44.400 09:36:11 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:44.400 09:36:11 nvme -- scripts/common.sh@368 -- # return 0 00:07:44.400 09:36:11 nvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:44.400 09:36:11 nvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:44.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.400 --rc genhtml_branch_coverage=1 00:07:44.400 --rc genhtml_function_coverage=1 00:07:44.400 --rc genhtml_legend=1 00:07:44.400 --rc geninfo_all_blocks=1 00:07:44.400 --rc geninfo_unexecuted_blocks=1 00:07:44.400 00:07:44.400 ' 00:07:44.400 09:36:11 nvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:44.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.400 --rc genhtml_branch_coverage=1 00:07:44.400 --rc genhtml_function_coverage=1 00:07:44.400 --rc genhtml_legend=1 00:07:44.400 --rc geninfo_all_blocks=1 00:07:44.400 --rc geninfo_unexecuted_blocks=1 00:07:44.400 00:07:44.400 ' 00:07:44.400 09:36:11 nvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:44.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.400 --rc genhtml_branch_coverage=1 00:07:44.400 --rc genhtml_function_coverage=1 00:07:44.400 --rc genhtml_legend=1 00:07:44.400 --rc geninfo_all_blocks=1 00:07:44.400 --rc geninfo_unexecuted_blocks=1 00:07:44.400 00:07:44.400 ' 00:07:44.400 09:36:11 nvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:44.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.400 --rc genhtml_branch_coverage=1 00:07:44.400 --rc genhtml_function_coverage=1 00:07:44.400 --rc genhtml_legend=1 00:07:44.400 --rc geninfo_all_blocks=1 00:07:44.400 --rc geninfo_unexecuted_blocks=1 00:07:44.400 00:07:44.400 ' 00:07:44.400 09:36:11 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:44.972 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:45.233 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:45.494 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:45.494 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:45.494 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:45.495 09:36:13 nvme -- nvme/nvme.sh@79 -- # uname 00:07:45.495 09:36:13 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:07:45.495 09:36:13 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:07:45.495 09:36:13 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:07:45.495 09:36:13 nvme -- common/autotest_common.sh@1084 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:07:45.495 09:36:13 nvme -- 
common/autotest_common.sh@1070 -- # _randomize_va_space=2 00:07:45.495 09:36:13 nvme -- common/autotest_common.sh@1071 -- # echo 0 00:07:45.495 09:36:13 nvme -- common/autotest_common.sh@1073 -- # stubpid=62907 00:07:45.495 09:36:13 nvme -- common/autotest_common.sh@1072 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:07:45.495 Waiting for stub to ready for secondary processes... 00:07:45.495 09:36:13 nvme -- common/autotest_common.sh@1074 -- # echo Waiting for stub to ready for secondary processes... 00:07:45.495 09:36:13 nvme -- common/autotest_common.sh@1075 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:45.495 09:36:13 nvme -- common/autotest_common.sh@1077 -- # [[ -e /proc/62907 ]] 00:07:45.495 09:36:13 nvme -- common/autotest_common.sh@1078 -- # sleep 1s 00:07:45.495 [2024-11-07 09:36:13.067131] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:07:45.495 [2024-11-07 09:36:13.067249] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:07:46.438 [2024-11-07 09:36:13.824028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:46.438 [2024-11-07 09:36:13.933884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:46.438 [2024-11-07 09:36:13.934163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:46.438 [2024-11-07 09:36:13.934165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:46.438 [2024-11-07 09:36:13.949153] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:07:46.438 [2024-11-07 09:36:13.949189] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:46.439 [2024-11-07 09:36:13.961445] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:07:46.439 [2024-11-07 09:36:13.961533] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:07:46.439 [2024-11-07 09:36:13.963572] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:46.439 [2024-11-07 09:36:13.963749] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:07:46.439 [2024-11-07 09:36:13.963802] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:07:46.439 [2024-11-07 09:36:13.965365] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:46.439 [2024-11-07 09:36:13.965486] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:07:46.439 [2024-11-07 09:36:13.965531] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:07:46.439 [2024-11-07 09:36:13.967617] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:46.439 [2024-11-07 09:36:13.967774] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:07:46.439 [2024-11-07 09:36:13.967830] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:07:46.439 [2024-11-07 09:36:13.967883] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:07:46.439 [2024-11-07 09:36:13.967918] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:07:46.439 done. 00:07:46.439 09:36:14 nvme -- common/autotest_common.sh@1075 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:46.439 09:36:14 nvme -- common/autotest_common.sh@1080 -- # echo done. 00:07:46.439 09:36:14 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:46.439 09:36:14 nvme -- common/autotest_common.sh@1103 -- # '[' 10 -le 1 ']' 00:07:46.439 09:36:14 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:46.439 09:36:14 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:46.439 ************************************ 00:07:46.439 START TEST nvme_reset 00:07:46.439 ************************************ 00:07:46.439 09:36:14 nvme.nvme_reset -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:46.700 Initializing NVMe Controllers 00:07:46.700 Skipping QEMU NVMe SSD at 0000:00:13.0 00:07:46.700 Skipping QEMU NVMe SSD at 0000:00:10.0 00:07:46.700 Skipping QEMU NVMe SSD at 0000:00:11.0 00:07:46.700 Skipping QEMU NVMe SSD at 0000:00:12.0 00:07:46.700 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:07:46.700 00:07:46.700 real 0m0.223s 00:07:46.700 user 0m0.066s 00:07:46.700 sys 0m0.113s 00:07:46.700 09:36:14 nvme.nvme_reset -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:46.700 09:36:14 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:07:46.700 ************************************ 00:07:46.700 END TEST nvme_reset 00:07:46.700 ************************************ 00:07:46.700 09:36:14 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:07:46.700 09:36:14 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:46.700 09:36:14 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:46.700 09:36:14 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:46.700 ************************************ 00:07:46.700 START TEST nvme_identify 00:07:46.700 ************************************ 00:07:46.700 09:36:14 nvme.nvme_identify -- common/autotest_common.sh@1127 -- # nvme_identify 00:07:46.700 09:36:14 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:07:46.700 09:36:14 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:07:46.700 09:36:14 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:07:46.700 09:36:14 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:07:46.700 09:36:14 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # bdfs=() 00:07:46.700 09:36:14 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # local bdfs 00:07:46.700 09:36:14 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:46.700 09:36:14 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:07:46.700 09:36:14 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:46.964 09:36:14 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:07:46.964 09:36:14 nvme.nvme_identify -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:46.964 09:36:14 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:07:46.964 [2024-11-07 
09:36:14.593284] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 62928 terminated unexpected 00:07:46.964 ===================================================== 00:07:46.964 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:46.964 ===================================================== 00:07:46.964 Controller Capabilities/Features 00:07:46.964 ================================ 00:07:46.964 Vendor ID: 1b36 00:07:46.964 Subsystem Vendor ID: 1af4 00:07:46.964 Serial Number: 12343 00:07:46.964 Model Number: QEMU NVMe Ctrl 00:07:46.964 Firmware Version: 8.0.0 00:07:46.964 Recommended Arb Burst: 6 00:07:46.964 IEEE OUI Identifier: 00 54 52 00:07:46.964 Multi-path I/O 00:07:46.964 May have multiple subsystem ports: No 00:07:46.964 May have multiple controllers: Yes 00:07:46.964 Associated with SR-IOV VF: No 00:07:46.964 Max Data Transfer Size: 524288 00:07:46.964 Max Number of Namespaces: 256 00:07:46.964 Max Number of I/O Queues: 64 00:07:46.964 NVMe Specification Version (VS): 1.4 00:07:46.964 NVMe Specification Version (Identify): 1.4 00:07:46.964 Maximum Queue Entries: 2048 00:07:46.964 Contiguous Queues Required: Yes 00:07:46.964 Arbitration Mechanisms Supported 00:07:46.964 Weighted Round Robin: Not Supported 00:07:46.964 Vendor Specific: Not Supported 00:07:46.964 Reset Timeout: 7500 ms 00:07:46.964 Doorbell Stride: 4 bytes 00:07:46.964 NVM Subsystem Reset: Not Supported 00:07:46.964 Command Sets Supported 00:07:46.964 NVM Command Set: Supported 00:07:46.964 Boot Partition: Not Supported 00:07:46.964 Memory Page Size Minimum: 4096 bytes 00:07:46.964 Memory Page Size Maximum: 65536 bytes 00:07:46.964 Persistent Memory Region: Not Supported 00:07:46.964 Optional Asynchronous Events Supported 00:07:46.964 Namespace Attribute Notices: Supported 00:07:46.964 Firmware Activation Notices: Not Supported 00:07:46.964 ANA Change Notices: Not Supported 00:07:46.964 PLE Aggregate Log Change Notices: Not Supported 00:07:46.964 LBA Status Info Alert Notices: Not Supported 00:07:46.964 EGE Aggregate Log Change Notices: Not Supported 00:07:46.964 Normal NVM Subsystem Shutdown event: Not Supported 00:07:46.964 Zone Descriptor Change Notices: Not Supported 00:07:46.964 Discovery Log Change Notices: Not Supported 00:07:46.964 Controller Attributes 00:07:46.965 128-bit Host Identifier: Not Supported 00:07:46.965 Non-Operational Permissive Mode: Not Supported 00:07:46.965 NVM Sets: Not Supported 00:07:46.965 Read Recovery Levels: Not Supported 00:07:46.965 Endurance Groups: Supported 00:07:46.965 Predictable Latency Mode: Not Supported 00:07:46.965 Traffic Based Keep ALive: Not Supported 00:07:46.965 Namespace Granularity: Not Supported 00:07:46.965 SQ Associations: Not Supported 00:07:46.965 UUID List: Not Supported 00:07:46.965 Multi-Domain Subsystem: Not Supported 00:07:46.965 Fixed Capacity Management: Not Supported 00:07:46.965 Variable Capacity Management: Not Supported 00:07:46.965 Delete Endurance Group: Not Supported 00:07:46.965 Delete NVM Set: Not Supported 00:07:46.965 Extended LBA Formats Supported: Supported 00:07:46.965 Flexible Data Placement Supported: Supported 00:07:46.965 00:07:46.965 Controller Memory Buffer Support 00:07:46.965 ================================ 00:07:46.965 Supported: No 00:07:46.965 00:07:46.965 Persistent Memory Region Support 00:07:46.965 ================================ 00:07:46.965 Supported: No 00:07:46.965 00:07:46.965 Admin Command Set Attributes 00:07:46.965 ============================ 00:07:46.965 Security Send/Receive: Not 
Supported 00:07:46.965 Format NVM: Supported 00:07:46.965 Firmware Activate/Download: Not Supported 00:07:46.965 Namespace Management: Supported 00:07:46.965 Device Self-Test: Not Supported 00:07:46.965 Directives: Supported 00:07:46.965 NVMe-MI: Not Supported 00:07:46.965 Virtualization Management: Not Supported 00:07:46.965 Doorbell Buffer Config: Supported 00:07:46.965 Get LBA Status Capability: Not Supported 00:07:46.965 Command & Feature Lockdown Capability: Not Supported 00:07:46.965 Abort Command Limit: 4 00:07:46.965 Async Event Request Limit: 4 00:07:46.965 Number of Firmware Slots: N/A 00:07:46.965 Firmware Slot 1 Read-Only: N/A 00:07:46.965 Firmware Activation Without Reset: N/A 00:07:46.965 Multiple Update Detection Support: N/A 00:07:46.965 Firmware Update Granularity: No Information Provided 00:07:46.965 Per-Namespace SMART Log: Yes 00:07:46.965 Asymmetric Namespace Access Log Page: Not Supported 00:07:46.965 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:46.965 Command Effects Log Page: Supported 00:07:46.965 Get Log Page Extended Data: Supported 00:07:46.965 Telemetry Log Pages: Not Supported 00:07:46.965 Persistent Event Log Pages: Not Supported 00:07:46.965 Supported Log Pages Log Page: May Support 00:07:46.965 Commands Supported & Effects Log Page: Not Supported 00:07:46.965 Feature Identifiers & Effects Log Page:May Support 00:07:46.965 NVMe-MI Commands & Effects Log Page: May Support 00:07:46.965 Data Area 4 for Telemetry Log: Not Supported 00:07:46.965 Error Log Page Entries Supported: 1 00:07:46.965 Keep Alive: Not Supported 00:07:46.965 00:07:46.965 NVM Command Set Attributes 00:07:46.965 ========================== 00:07:46.965 Submission Queue Entry Size 00:07:46.965 Max: 64 00:07:46.965 Min: 64 00:07:46.965 Completion Queue Entry Size 00:07:46.965 Max: 16 00:07:46.965 Min: 16 00:07:46.965 Number of Namespaces: 256 00:07:46.965 Compare Command: Supported 00:07:46.965 Write Uncorrectable Command: Not Supported 00:07:46.965 Dataset Management Command: Supported 00:07:46.965 Write Zeroes Command: Supported 00:07:46.965 Set Features Save Field: Supported 00:07:46.965 Reservations: Not Supported 00:07:46.965 Timestamp: Supported 00:07:46.965 Copy: Supported 00:07:46.965 Volatile Write Cache: Present 00:07:46.965 Atomic Write Unit (Normal): 1 00:07:46.965 Atomic Write Unit (PFail): 1 00:07:46.965 Atomic Compare & Write Unit: 1 00:07:46.965 Fused Compare & Write: Not Supported 00:07:46.965 Scatter-Gather List 00:07:46.965 SGL Command Set: Supported 00:07:46.965 SGL Keyed: Not Supported 00:07:46.965 SGL Bit Bucket Descriptor: Not Supported 00:07:46.965 SGL Metadata Pointer: Not Supported 00:07:46.965 Oversized SGL: Not Supported 00:07:46.965 SGL Metadata Address: Not Supported 00:07:46.965 SGL Offset: Not Supported 00:07:46.965 Transport SGL Data Block: Not Supported 00:07:46.965 Replay Protected Memory Block: Not Supported 00:07:46.965 00:07:46.965 Firmware Slot Information 00:07:46.965 ========================= 00:07:46.965 Active slot: 1 00:07:46.965 Slot 1 Firmware Revision: 1.0 00:07:46.965 00:07:46.965 00:07:46.965 Commands Supported and Effects 00:07:46.965 ============================== 00:07:46.965 Admin Commands 00:07:46.965 -------------- 00:07:46.965 Delete I/O Submission Queue (00h): Supported 00:07:46.965 Create I/O Submission Queue (01h): Supported 00:07:46.965 Get Log Page (02h): Supported 00:07:46.965 Delete I/O Completion Queue (04h): Supported 00:07:46.965 Create I/O Completion Queue (05h): Supported 00:07:46.965 Identify (06h): Supported 
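A side note on reading these dumps: spdk_nvme_identify emits flat "Field: Value" records with "=" banner underlines, one controller after another. For post-processing a captured dump (with the autotest timestamp prefixes stripped), a minimal parsing sketch; parse_identify is a hypothetical helper written for illustration, not an SPDK API:

import re
import sys

def parse_identify(text: str) -> dict:
    """Collect 'Field: Value' pairs from spdk_nvme_identify text output.

    Keys repeat across controllers, so later controllers overwrite earlier
    ones; split the text on the 'NVMe Controller at ...' banners first if
    per-controller data is needed.
    """
    fields = {}
    for line in text.splitlines():
        stripped = line.strip()
        if not stripped or set(stripped) <= {"="}:  # skip blanks and banner underlines
            continue
        m = re.match(r"([^:]+):\s*(.+)$", stripped)
        if m:
            fields[m.group(1).strip()] = m.group(2).strip()
    return fields

if __name__ == "__main__":
    info = parse_identify(sys.stdin.read())
    print(info.get("Serial Number"), info.get("Model Number"))

Fed just the 12343 dump above, this would print "12343 QEMU NVMe Ctrl".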
00:07:46.965 Abort (08h): Supported 00:07:46.965 Set Features (09h): Supported 00:07:46.965 Get Features (0Ah): Supported 00:07:46.965 Asynchronous Event Request (0Ch): Supported 00:07:46.965 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:46.965 Directive Send (19h): Supported 00:07:46.965 Directive Receive (1Ah): Supported 00:07:46.965 Virtualization Management (1Ch): Supported 00:07:46.965 Doorbell Buffer Config (7Ch): Supported 00:07:46.965 Format NVM (80h): Supported LBA-Change 00:07:46.965 I/O Commands 00:07:46.965 ------------ 00:07:46.965 Flush (00h): Supported LBA-Change 00:07:46.965 Write (01h): Supported LBA-Change 00:07:46.965 Read (02h): Supported 00:07:46.965 Compare (05h): Supported 00:07:46.965 Write Zeroes (08h): Supported LBA-Change 00:07:46.965 Dataset Management (09h): Supported LBA-Change 00:07:46.965 Unknown (0Ch): Supported 00:07:46.965 Unknown (12h): Supported 00:07:46.965 Copy (19h): Supported LBA-Change 00:07:46.965 Unknown (1Dh): Supported LBA-Change 00:07:46.965 00:07:46.965 Error Log 00:07:46.965 ========= 00:07:46.965 00:07:46.965 Arbitration 00:07:46.965 =========== 00:07:46.965 Arbitration Burst: no limit 00:07:46.965 00:07:46.965 Power Management 00:07:46.965 ================ 00:07:46.965 Number of Power States: 1 00:07:46.965 Current Power State: Power State #0 00:07:46.965 Power State #0: 00:07:46.965 Max Power: 25.00 W 00:07:46.965 Non-Operational State: Operational 00:07:46.965 Entry Latency: 16 microseconds 00:07:46.965 Exit Latency: 4 microseconds 00:07:46.965 Relative Read Throughput: 0 00:07:46.965 Relative Read Latency: 0 00:07:46.965 Relative Write Throughput: 0 00:07:46.965 Relative Write Latency: 0 00:07:46.965 Idle Power: Not Reported 00:07:46.965 Active Power: Not Reported 00:07:46.965 Non-Operational Permissive Mode: Not Supported 00:07:46.965 00:07:46.965 Health Information 00:07:46.965 ================== 00:07:46.965 Critical Warnings: 00:07:46.965 Available Spare Space: OK 00:07:46.965 Temperature: OK 00:07:46.965 Device Reliability: OK 00:07:46.965 Read Only: No 00:07:46.965 Volatile Memory Backup: OK 00:07:46.965 Current Temperature: 323 Kelvin (50 Celsius) 00:07:46.965 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:46.965 Available Spare: 0% 00:07:46.965 Available Spare Threshold: 0% 00:07:46.965 Life Percentage Used: 0% 00:07:46.965 Data Units Read: 1078 00:07:46.965 Data Units Written: 1007 00:07:46.965 Host Read Commands: 42039 00:07:46.965 Host Write Commands: 41462 00:07:46.965 Controller Busy Time: 0 minutes 00:07:46.965 Power Cycles: 0 00:07:46.965 Power On Hours: 0 hours 00:07:46.965 Unsafe Shutdowns: 0 00:07:46.965 Unrecoverable Media Errors: 0 00:07:46.965 Lifetime Error Log Entries: 0 00:07:46.965 Warning Temperature Time: 0 minutes 00:07:46.965 Critical Temperature Time: 0 minutes 00:07:46.965 00:07:46.965 Number of Queues 00:07:46.965 ================ 00:07:46.965 Number of I/O Submission Queues: 64 00:07:46.965 Number of I/O Completion Queues: 64 00:07:46.965 00:07:46.965 ZNS Specific Controller Data 00:07:46.965 ============================ 00:07:46.965 Zone Append Size Limit: 0 00:07:46.965 00:07:46.965 00:07:46.965 Active Namespaces 00:07:46.965 ================= 00:07:46.965 Namespace ID:1 00:07:46.965 Error Recovery Timeout: Unlimited 00:07:46.965 Command Set Identifier: NVM (00h) 00:07:46.965 Deallocate: Supported 00:07:46.965 Deallocated/Unwritten Error: Supported 00:07:46.965 Deallocated Read Value: All 0x00 00:07:46.965 Deallocate in Write Zeroes: Not Supported 00:07:46.965 Deallocated Guard 
Field: 0xFFFF 00:07:46.965 Flush: Supported 00:07:46.965 Reservation: Not Supported 00:07:46.965 Namespace Sharing Capabilities: Multiple Controllers 00:07:46.965 Size (in LBAs): 262144 (1GiB) 00:07:46.965 Capacity (in LBAs): 262144 (1GiB) 00:07:46.966 Utilization (in LBAs): 262144 (1GiB) 00:07:46.966 Thin Provisioning: Not Supported 00:07:46.966 Per-NS Atomic Units: No 00:07:46.966 Maximum Single Source Range Length: 128 00:07:46.966 Maximum Copy Length: 128 00:07:46.966 Maximum Source Range Count: 128 00:07:46.966 NGUID/EUI64 Never Reused: No 00:07:46.966 Namespace Write Protected: No 00:07:46.966 Endurance group ID: 1 00:07:46.966 Number of LBA Formats: 8 00:07:46.966 Current LBA Format: LBA Format #04 00:07:46.966 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:46.966 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:46.966 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:46.966 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:46.966 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:46.966 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:46.966 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:46.966 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:46.966 00:07:46.966 Get Feature FDP: 00:07:46.966 ================ 00:07:46.966 Enabled: Yes 00:07:46.966 FDP configuration index: 0 00:07:46.966 00:07:46.966 FDP configurations log page 00:07:46.966 =========================== 00:07:46.966 Number of FDP configurations: 1 00:07:46.966 Version: 0 00:07:46.966 Size: 112 00:07:46.966 FDP Configuration Descriptor: 0 00:07:46.966 Descriptor Size: 96 00:07:46.966 Reclaim Group Identifier format: 2 00:07:46.966 FDP Volatile Write Cache: Not Present 00:07:46.966 FDP Configuration: Valid 00:07:46.966 Vendor Specific Size: 0 00:07:46.966 Number of Reclaim Groups: 2 00:07:46.966 Number of Reclaim Unit Handles: 8 00:07:46.966 Max Placement Identifiers: 128 00:07:46.966 Number of Namespaces Supported: 256 00:07:46.966 Reclaim Unit Nominal Size: 6000000 bytes 00:07:46.966 Estimated Reclaim Unit Time Limit: Not Reported 00:07:46.966 RUH Desc #000: RUH Type: Initially Isolated 00:07:46.966 RUH Desc #001: RUH Type: Initially Isolated 00:07:46.966 RUH Desc #002: RUH Type: Initially Isolated 00:07:46.966 RUH Desc #003: RUH Type: Initially Isolated 00:07:46.966 RUH Desc #004: RUH Type: Initially Isolated 00:07:46.966 RUH Desc #005: RUH Type: Initially Isolated 00:07:46.966 RUH Desc #006: RUH Type: Initially Isolated 00:07:46.966 RUH Desc #007: RUH Type: Initially Isolated 00:07:46.966 00:07:46.966 FDP reclaim unit handle usage log page 00:07:46.966 ====================================== [2024-11-07 09:36:14.596503] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 62928 terminated unexpected 00:07:46.966 Number of Reclaim Unit Handles: 8 00:07:46.966 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:46.966 RUH Usage Desc #001: RUH Attributes: Unused 00:07:46.966 RUH Usage Desc #002: RUH Attributes: Unused 00:07:46.966 RUH Usage Desc #003: RUH Attributes: Unused 00:07:46.966 RUH Usage Desc #004: RUH Attributes: Unused 00:07:46.966 RUH Usage Desc #005: RUH Attributes: Unused 00:07:46.966 RUH Usage Desc #006: RUH Attributes: Unused 00:07:46.966 RUH Usage Desc #007: RUH Attributes: Unused 00:07:46.966 00:07:46.966 FDP statistics log page 00:07:46.966 ======================= 00:07:46.966 Host bytes with metadata written: 606314496 00:07:46.966 Media bytes with metadata written: 606396416 00:07:46.966 Media
bytes erased: 0 00:07:46.966 00:07:46.966 FDP events log page 00:07:46.966 =================== 00:07:46.966 Number of FDP events: 0 00:07:46.966 00:07:46.966 NVM Specific Namespace Data 00:07:46.966 =========================== 00:07:46.966 Logical Block Storage Tag Mask: 0 00:07:46.966 Protection Information Capabilities: 00:07:46.966 16b Guard Protection Information Storage Tag Support: No 00:07:46.966 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:46.966 Storage Tag Check Read Support: No 00:07:46.966 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.966 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.966 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.966 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.966 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.966 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.966 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.966 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.966 ===================================================== 00:07:46.966 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:46.966 ===================================================== 00:07:46.966 Controller Capabilities/Features 00:07:46.966 ================================ 00:07:46.966 Vendor ID: 1b36 00:07:46.966 Subsystem Vendor ID: 1af4 00:07:46.966 Serial Number: 12340 00:07:46.966 Model Number: QEMU NVMe Ctrl 00:07:46.966 Firmware Version: 8.0.0 00:07:46.966 Recommended Arb Burst: 6 00:07:46.966 IEEE OUI Identifier: 00 54 52 00:07:46.966 Multi-path I/O 00:07:46.966 May have multiple subsystem ports: No 00:07:46.966 May have multiple controllers: No 00:07:46.966 Associated with SR-IOV VF: No 00:07:46.966 Max Data Transfer Size: 524288 00:07:46.966 Max Number of Namespaces: 256 00:07:46.966 Max Number of I/O Queues: 64 00:07:46.966 NVMe Specification Version (VS): 1.4 00:07:46.966 NVMe Specification Version (Identify): 1.4 00:07:46.966 Maximum Queue Entries: 2048 00:07:46.966 Contiguous Queues Required: Yes 00:07:46.966 Arbitration Mechanisms Supported 00:07:46.966 Weighted Round Robin: Not Supported 00:07:46.966 Vendor Specific: Not Supported 00:07:46.966 Reset Timeout: 7500 ms 00:07:46.966 Doorbell Stride: 4 bytes 00:07:46.966 NVM Subsystem Reset: Not Supported 00:07:46.966 Command Sets Supported 00:07:46.966 NVM Command Set: Supported 00:07:46.966 Boot Partition: Not Supported 00:07:46.966 Memory Page Size Minimum: 4096 bytes 00:07:46.966 Memory Page Size Maximum: 65536 bytes 00:07:46.966 Persistent Memory Region: Not Supported 00:07:46.966 Optional Asynchronous Events Supported 00:07:46.966 Namespace Attribute Notices: Supported 00:07:46.966 Firmware Activation Notices: Not Supported 00:07:46.966 ANA Change Notices: Not Supported 00:07:46.966 PLE Aggregate Log Change Notices: Not Supported 00:07:46.966 LBA Status Info Alert Notices: Not Supported 00:07:46.966 EGE Aggregate Log Change Notices: Not Supported 00:07:46.966 Normal NVM Subsystem Shutdown event: Not Supported 00:07:46.966 Zone Descriptor Change Notices: Not Supported 00:07:46.966 Discovery Log Change Notices: Not Supported 00:07:46.966 Controller Attributes 00:07:46.966 
128-bit Host Identifier: Not Supported 00:07:46.966 Non-Operational Permissive Mode: Not Supported 00:07:46.966 NVM Sets: Not Supported 00:07:46.966 Read Recovery Levels: Not Supported 00:07:46.966 Endurance Groups: Not Supported 00:07:46.966 Predictable Latency Mode: Not Supported 00:07:46.966 Traffic Based Keep ALive: Not Supported 00:07:46.966 Namespace Granularity: Not Supported 00:07:46.966 SQ Associations: Not Supported 00:07:46.966 UUID List: Not Supported 00:07:46.966 Multi-Domain Subsystem: Not Supported 00:07:46.966 Fixed Capacity Management: Not Supported 00:07:46.966 Variable Capacity Management: Not Supported 00:07:46.966 Delete Endurance Group: Not Supported 00:07:46.966 Delete NVM Set: Not Supported 00:07:46.966 Extended LBA Formats Supported: Supported 00:07:46.966 Flexible Data Placement Supported: Not Supported 00:07:46.966 00:07:46.966 Controller Memory Buffer Support 00:07:46.966 ================================ 00:07:46.966 Supported: No 00:07:46.966 00:07:46.966 Persistent Memory Region Support 00:07:46.966 ================================ 00:07:46.966 Supported: No 00:07:46.966 00:07:46.966 Admin Command Set Attributes 00:07:46.966 ============================ 00:07:46.966 Security Send/Receive: Not Supported 00:07:46.966 Format NVM: Supported 00:07:46.966 Firmware Activate/Download: Not Supported 00:07:46.966 Namespace Management: Supported 00:07:46.966 Device Self-Test: Not Supported 00:07:46.966 Directives: Supported 00:07:46.966 NVMe-MI: Not Supported 00:07:46.966 Virtualization Management: Not Supported 00:07:46.966 Doorbell Buffer Config: Supported 00:07:46.966 Get LBA Status Capability: Not Supported 00:07:46.966 Command & Feature Lockdown Capability: Not Supported 00:07:46.966 Abort Command Limit: 4 00:07:46.966 Async Event Request Limit: 4 00:07:46.966 Number of Firmware Slots: N/A 00:07:46.966 Firmware Slot 1 Read-Only: N/A 00:07:46.966 Firmware Activation Without Reset: N/A 00:07:46.966 Multiple Update Detection Support: N/A 00:07:46.966 Firmware Update Granularity: No Information Provided 00:07:46.966 Per-Namespace SMART Log: Yes 00:07:46.966 Asymmetric Namespace Access Log Page: Not Supported 00:07:46.966 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:46.966 Command Effects Log Page: Supported 00:07:46.966 Get Log Page Extended Data: Supported 00:07:46.966 Telemetry Log Pages: Not Supported 00:07:46.966 Persistent Event Log Pages: Not Supported 00:07:46.966 Supported Log Pages Log Page: May Support 00:07:46.967 Commands Supported & Effects Log Page: Not Supported 00:07:46.967 Feature Identifiers & Effects Log Page:May Support 00:07:46.967 NVMe-MI Commands & Effects Log Page: May Support 00:07:46.967 Data Area 4 for Telemetry Log: Not Supported 00:07:46.967 Error Log Page Entries Supported: 1 00:07:46.967 Keep Alive: Not Supported 00:07:46.967 00:07:46.967 NVM Command Set Attributes 00:07:46.967 ========================== 00:07:46.967 Submission Queue Entry Size 00:07:46.967 Max: 64 00:07:46.967 Min: 64 00:07:46.967 Completion Queue Entry Size 00:07:46.967 Max: 16 00:07:46.967 Min: 16 00:07:46.967 Number of Namespaces: 256 00:07:46.967 Compare Command: Supported 00:07:46.967 Write Uncorrectable Command: Not Supported 00:07:46.967 Dataset Management Command: Supported 00:07:46.967 Write Zeroes Command: Supported 00:07:46.967 Set Features Save Field: Supported 00:07:46.967 Reservations: Not Supported 00:07:46.967 Timestamp: Supported 00:07:46.967 Copy: Supported 00:07:46.967 Volatile Write Cache: Present 00:07:46.967 Atomic Write Unit (Normal): 1 
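One detail worth decoding in the health blocks above and below: NVMe reports the composite temperature as an integer in Kelvin, and the parenthesized Celsius figure in this output is effectively K - 273 (strictly K - 273.15, which rounds to the same integers here). A quick check against the values in the dump:

def kelvin_to_celsius(k: int) -> int:
    return k - 273  # matches "323 Kelvin (50 Celsius)" and "343 Kelvin (70 Celsius)"

assert kelvin_to_celsius(323) == 50
assert kelvin_to_celsius(343) == 70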
00:07:46.967 Atomic Write Unit (PFail): 1 00:07:46.967 Atomic Compare & Write Unit: 1 00:07:46.967 Fused Compare & Write: Not Supported 00:07:46.967 Scatter-Gather List 00:07:46.967 SGL Command Set: Supported 00:07:46.967 SGL Keyed: Not Supported 00:07:46.967 SGL Bit Bucket Descriptor: Not Supported 00:07:46.967 SGL Metadata Pointer: Not Supported 00:07:46.967 Oversized SGL: Not Supported 00:07:46.967 SGL Metadata Address: Not Supported 00:07:46.967 SGL Offset: Not Supported 00:07:46.967 Transport SGL Data Block: Not Supported 00:07:46.967 Replay Protected Memory Block: Not Supported 00:07:46.967 00:07:46.967 Firmware Slot Information 00:07:46.967 ========================= 00:07:46.967 Active slot: 1 00:07:46.967 Slot 1 Firmware Revision: 1.0 00:07:46.967 00:07:46.967 00:07:46.967 Commands Supported and Effects 00:07:46.967 ============================== 00:07:46.967 Admin Commands 00:07:46.967 -------------- 00:07:46.967 Delete I/O Submission Queue (00h): Supported 00:07:46.967 Create I/O Submission Queue (01h): Supported 00:07:46.967 Get Log Page (02h): Supported 00:07:46.967 Delete I/O Completion Queue (04h): Supported 00:07:46.967 Create I/O Completion Queue (05h): Supported 00:07:46.967 Identify (06h): Supported 00:07:46.967 Abort (08h): Supported 00:07:46.967 Set Features (09h): Supported 00:07:46.967 Get Features (0Ah): Supported 00:07:46.967 Asynchronous Event Request (0Ch): Supported 00:07:46.967 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:46.967 Directive Send (19h): Supported 00:07:46.967 Directive Receive (1Ah): Supported 00:07:46.967 Virtualization Management (1Ch): Supported 00:07:46.967 Doorbell Buffer Config (7Ch): Supported 00:07:46.967 Format NVM (80h): Supported LBA-Change 00:07:46.967 I/O Commands 00:07:46.967 ------------ 00:07:46.967 Flush (00h): Supported LBA-Change 00:07:46.967 Write (01h): Supported LBA-Change 00:07:46.967 Read (02h): Supported 00:07:46.967 Compare (05h): Supported 00:07:46.967 Write Zeroes (08h): Supported LBA-Change 00:07:46.967 Dataset Management (09h): Supported LBA-Change 00:07:46.967 Unknown (0Ch): Supported 00:07:46.967 Unknown (12h): Supported 00:07:46.967 Copy (19h): Supported LBA-Change 00:07:46.967 Unknown (1Dh): Supported LBA-Change 00:07:46.967 00:07:46.967 Error Log 00:07:46.967 ========= 00:07:46.967 00:07:46.967 Arbitration 00:07:46.967 =========== 00:07:46.967 Arbitration Burst: no limit 00:07:46.967 00:07:46.967 Power Management 00:07:46.967 ================ 00:07:46.967 Number of Power States: 1 00:07:46.967 Current Power State: Power State #0 00:07:46.967 Power State #0: 00:07:46.967 Max Power: 25.00 W 00:07:46.967 Non-Operational State: Operational 00:07:46.967 Entry Latency: 16 microseconds 00:07:46.967 Exit Latency: 4 microseconds 00:07:46.967 Relative Read Throughput: 0 00:07:46.967 Relative Read Latency: 0 00:07:46.967 Relative Write Throughput: 0 00:07:46.967 Relative Write Latency: 0 00:07:46.967 Idle Power: Not Reported 00:07:46.967 Active Power: Not Reported 00:07:46.967 Non-Operational Permissive Mode: Not Supported 00:07:46.967 00:07:46.967 Health Information 00:07:46.967 ================== 00:07:46.967 Critical Warnings: 00:07:46.967 Available Spare Space: OK 00:07:46.967 Temperature: OK 00:07:46.967 Device Reliability: OK 00:07:46.967 Read Only: No 00:07:46.967 Volatile Memory Backup: OK 00:07:46.967 Current Temperature: 323 Kelvin (50 Celsius) 00:07:46.967 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:46.967 Available Spare: 0% 00:07:46.967 Available Spare Threshold: 0% 00:07:46.967 Life 
Percentage Used: 0% 00:07:46.967 Data Units Read: 673 00:07:46.967 Data Units Written: 601 00:07:46.967 Host Read Commands: 38519 00:07:46.967 Host Write Commands: 38305 00:07:46.967 Controller Busy Time: 0 minutes 00:07:46.967 Power Cycles: 0 00:07:46.967 Power On Hours: 0 hours 00:07:46.967 Unsafe Shutdowns: 0 00:07:46.967 Unrecoverable Media Errors: 0 00:07:46.967 Lifetime Error Log Entries: 0 00:07:46.967 Warning Temperature Time: 0 minutes 00:07:46.967 Critical Temperature Time: 0 minutes 00:07:46.967 00:07:46.967 Number of Queues 00:07:46.967 ================ 00:07:46.967 Number of I/O Submission Queues: 64 00:07:46.967 Number of I/O Completion Queues: 64 00:07:46.967 00:07:46.967 ZNS Specific Controller Data 00:07:46.967 ============================ 00:07:46.967 Zone Append Size Limit: 0 00:07:46.967 00:07:46.967 00:07:46.967 Active Namespaces 00:07:46.967 ================= 00:07:46.967 Namespace ID:1 00:07:46.967 Error Recovery Timeout: Unlimited 00:07:46.967 Command Set Identifier: NVM (00h) 00:07:46.967 Deallocate: Supported 00:07:46.967 Deallocated/Unwritten Error: Supported 00:07:46.967 Deallocated Read Value: All 0x00 00:07:46.967 Deallocate in Write Zeroes: Not Supported 00:07:46.967 Deallocated Guard Field: 0xFFFF 00:07:46.967 Flush: Supported 00:07:46.967 Reservation: Not Supported 00:07:46.967 Metadata Transferred as: Separate Metadata Buffer 00:07:46.967 Namespace Sharing Capabilities: Private 00:07:46.967 Size (in LBAs): 1548666 (5GiB) 00:07:46.967 Capacity (in LBAs): 1548666 (5GiB) 00:07:46.967 Utilization (in LBAs): 1548666 (5GiB) 00:07:46.967 Thin Provisioning: Not Supported 00:07:46.967 Per-NS Atomic Units: No 00:07:46.967 Maximum Single Source Range Length: 128 00:07:46.967 Maximum Copy Length: 128 00:07:46.967 Maximum Source Range Count: 128 00:07:46.967 NGUID/EUI64 Never Reused: No 00:07:46.967 Namespace Write Protected: No 00:07:46.967 Number of LBA Formats: 8 00:07:46.967 [2024-11-07 09:36:14.597775] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 62928 terminated unexpected 00:07:46.967 Current LBA Format: LBA Format #07 00:07:46.967 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:46.967 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:46.967 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:46.967 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:46.967 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:46.967 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:46.967 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:46.967 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:46.967 00:07:46.967 NVM Specific Namespace Data 00:07:46.967 =========================== 00:07:46.967 Logical Block Storage Tag Mask: 0 00:07:46.967 Protection Information Capabilities: 00:07:46.967 16b Guard Protection Information Storage Tag Support: No 00:07:46.967 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:46.967 Storage Tag Check Read Support: No 00:07:46.967 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.967 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.967 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.967 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.967 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.967
Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.967 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.967 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.967 ===================================================== 00:07:46.967 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:46.967 ===================================================== 00:07:46.967 Controller Capabilities/Features 00:07:46.967 ================================ 00:07:46.967 Vendor ID: 1b36 00:07:46.967 Subsystem Vendor ID: 1af4 00:07:46.967 Serial Number: 12341 00:07:46.967 Model Number: QEMU NVMe Ctrl 00:07:46.967 Firmware Version: 8.0.0 00:07:46.967 Recommended Arb Burst: 6 00:07:46.967 IEEE OUI Identifier: 00 54 52 00:07:46.967 Multi-path I/O 00:07:46.967 May have multiple subsystem ports: No 00:07:46.967 May have multiple controllers: No 00:07:46.968 Associated with SR-IOV VF: No 00:07:46.968 Max Data Transfer Size: 524288 00:07:46.968 Max Number of Namespaces: 256 00:07:46.968 Max Number of I/O Queues: 64 00:07:46.968 NVMe Specification Version (VS): 1.4 00:07:46.968 NVMe Specification Version (Identify): 1.4 00:07:46.968 Maximum Queue Entries: 2048 00:07:46.968 Contiguous Queues Required: Yes 00:07:46.968 Arbitration Mechanisms Supported 00:07:46.968 Weighted Round Robin: Not Supported 00:07:46.968 Vendor Specific: Not Supported 00:07:46.968 Reset Timeout: 7500 ms 00:07:46.968 Doorbell Stride: 4 bytes 00:07:46.968 NVM Subsystem Reset: Not Supported 00:07:46.968 Command Sets Supported 00:07:46.968 NVM Command Set: Supported 00:07:46.968 Boot Partition: Not Supported 00:07:46.968 Memory Page Size Minimum: 4096 bytes 00:07:46.968 Memory Page Size Maximum: 65536 bytes 00:07:46.968 Persistent Memory Region: Not Supported 00:07:46.968 Optional Asynchronous Events Supported 00:07:46.968 Namespace Attribute Notices: Supported 00:07:46.968 Firmware Activation Notices: Not Supported 00:07:46.968 ANA Change Notices: Not Supported 00:07:46.968 PLE Aggregate Log Change Notices: Not Supported 00:07:46.968 LBA Status Info Alert Notices: Not Supported 00:07:46.968 EGE Aggregate Log Change Notices: Not Supported 00:07:46.968 Normal NVM Subsystem Shutdown event: Not Supported 00:07:46.968 Zone Descriptor Change Notices: Not Supported 00:07:46.968 Discovery Log Change Notices: Not Supported 00:07:46.968 Controller Attributes 00:07:46.968 128-bit Host Identifier: Not Supported 00:07:46.968 Non-Operational Permissive Mode: Not Supported 00:07:46.968 NVM Sets: Not Supported 00:07:46.968 Read Recovery Levels: Not Supported 00:07:46.968 Endurance Groups: Not Supported 00:07:46.968 Predictable Latency Mode: Not Supported 00:07:46.968 Traffic Based Keep ALive: Not Supported 00:07:46.968 Namespace Granularity: Not Supported 00:07:46.968 SQ Associations: Not Supported 00:07:46.968 UUID List: Not Supported 00:07:46.968 Multi-Domain Subsystem: Not Supported 00:07:46.968 Fixed Capacity Management: Not Supported 00:07:46.968 Variable Capacity Management: Not Supported 00:07:46.968 Delete Endurance Group: Not Supported 00:07:46.968 Delete NVM Set: Not Supported 00:07:46.968 Extended LBA Formats Supported: Supported 00:07:46.968 Flexible Data Placement Supported: Not Supported 00:07:46.968 00:07:46.968 Controller Memory Buffer Support 00:07:46.968 ================================ 00:07:46.968 Supported: No 00:07:46.968 00:07:46.968 Persistent Memory Region Support 00:07:46.968 
================================ 00:07:46.968 Supported: No 00:07:46.968 00:07:46.968 Admin Command Set Attributes 00:07:46.968 ============================ 00:07:46.968 Security Send/Receive: Not Supported 00:07:46.968 Format NVM: Supported 00:07:46.968 Firmware Activate/Download: Not Supported 00:07:46.968 Namespace Management: Supported 00:07:46.968 Device Self-Test: Not Supported 00:07:46.968 Directives: Supported 00:07:46.968 NVMe-MI: Not Supported 00:07:46.968 Virtualization Management: Not Supported 00:07:46.968 Doorbell Buffer Config: Supported 00:07:46.968 Get LBA Status Capability: Not Supported 00:07:46.968 Command & Feature Lockdown Capability: Not Supported 00:07:46.968 Abort Command Limit: 4 00:07:46.968 Async Event Request Limit: 4 00:07:46.968 Number of Firmware Slots: N/A 00:07:46.968 Firmware Slot 1 Read-Only: N/A 00:07:46.968 Firmware Activation Without Reset: N/A 00:07:46.968 Multiple Update Detection Support: N/A 00:07:46.968 Firmware Update Granularity: No Information Provided 00:07:46.968 Per-Namespace SMART Log: Yes 00:07:46.968 Asymmetric Namespace Access Log Page: Not Supported 00:07:46.968 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:46.968 Command Effects Log Page: Supported 00:07:46.968 Get Log Page Extended Data: Supported 00:07:46.968 Telemetry Log Pages: Not Supported 00:07:46.968 Persistent Event Log Pages: Not Supported 00:07:46.968 Supported Log Pages Log Page: May Support 00:07:46.968 Commands Supported & Effects Log Page: Not Supported 00:07:46.968 Feature Identifiers & Effects Log Page:May Support 00:07:46.968 NVMe-MI Commands & Effects Log Page: May Support 00:07:46.968 Data Area 4 for Telemetry Log: Not Supported 00:07:46.968 Error Log Page Entries Supported: 1 00:07:46.968 Keep Alive: Not Supported 00:07:46.968 00:07:46.968 NVM Command Set Attributes 00:07:46.968 ========================== 00:07:46.968 Submission Queue Entry Size 00:07:46.968 Max: 64 00:07:46.968 Min: 64 00:07:46.968 Completion Queue Entry Size 00:07:46.968 Max: 16 00:07:46.968 Min: 16 00:07:46.968 Number of Namespaces: 256 00:07:46.968 Compare Command: Supported 00:07:46.968 Write Uncorrectable Command: Not Supported 00:07:46.968 Dataset Management Command: Supported 00:07:46.968 Write Zeroes Command: Supported 00:07:46.968 Set Features Save Field: Supported 00:07:46.968 Reservations: Not Supported 00:07:46.968 Timestamp: Supported 00:07:46.968 Copy: Supported 00:07:46.968 Volatile Write Cache: Present 00:07:46.968 Atomic Write Unit (Normal): 1 00:07:46.968 Atomic Write Unit (PFail): 1 00:07:46.968 Atomic Compare & Write Unit: 1 00:07:46.968 Fused Compare & Write: Not Supported 00:07:46.968 Scatter-Gather List 00:07:46.968 SGL Command Set: Supported 00:07:46.968 SGL Keyed: Not Supported 00:07:46.968 SGL Bit Bucket Descriptor: Not Supported 00:07:46.968 SGL Metadata Pointer: Not Supported 00:07:46.968 Oversized SGL: Not Supported 00:07:46.968 SGL Metadata Address: Not Supported 00:07:46.968 SGL Offset: Not Supported 00:07:46.968 Transport SGL Data Block: Not Supported 00:07:46.968 Replay Protected Memory Block: Not Supported 00:07:46.968 00:07:46.968 Firmware Slot Information 00:07:46.968 ========================= 00:07:46.968 Active slot: 1 00:07:46.968 Slot 1 Firmware Revision: 1.0 00:07:46.968 00:07:46.968 00:07:46.968 Commands Supported and Effects 00:07:46.968 ============================== 00:07:46.968 Admin Commands 00:07:46.968 -------------- 00:07:46.968 Delete I/O Submission Queue (00h): Supported 00:07:46.968 Create I/O Submission Queue (01h): Supported 00:07:46.968 
Get Log Page (02h): Supported 00:07:46.968 Delete I/O Completion Queue (04h): Supported 00:07:46.968 Create I/O Completion Queue (05h): Supported 00:07:46.968 Identify (06h): Supported 00:07:46.968 Abort (08h): Supported 00:07:46.968 Set Features (09h): Supported 00:07:46.968 Get Features (0Ah): Supported 00:07:46.968 Asynchronous Event Request (0Ch): Supported 00:07:46.968 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:46.968 Directive Send (19h): Supported 00:07:46.968 Directive Receive (1Ah): Supported 00:07:46.968 Virtualization Management (1Ch): Supported 00:07:46.968 Doorbell Buffer Config (7Ch): Supported 00:07:46.968 Format NVM (80h): Supported LBA-Change 00:07:46.968 I/O Commands 00:07:46.968 ------------ 00:07:46.968 Flush (00h): Supported LBA-Change 00:07:46.968 Write (01h): Supported LBA-Change 00:07:46.968 Read (02h): Supported 00:07:46.968 Compare (05h): Supported 00:07:46.968 Write Zeroes (08h): Supported LBA-Change 00:07:46.968 Dataset Management (09h): Supported LBA-Change 00:07:46.968 Unknown (0Ch): Supported 00:07:46.968 Unknown (12h): Supported 00:07:46.968 Copy (19h): Supported LBA-Change 00:07:46.968 Unknown (1Dh): Supported LBA-Change 00:07:46.968 00:07:46.968 Error Log 00:07:46.968 ========= 00:07:46.968 00:07:46.968 Arbitration 00:07:46.968 =========== 00:07:46.968 Arbitration Burst: no limit 00:07:46.968 00:07:46.968 Power Management 00:07:46.968 ================ 00:07:46.968 Number of Power States: 1 00:07:46.968 Current Power State: Power State #0 00:07:46.968 Power State #0: 00:07:46.968 Max Power: 25.00 W 00:07:46.968 Non-Operational State: Operational 00:07:46.968 Entry Latency: 16 microseconds 00:07:46.968 Exit Latency: 4 microseconds 00:07:46.968 Relative Read Throughput: 0 00:07:46.968 Relative Read Latency: 0 00:07:46.968 Relative Write Throughput: 0 00:07:46.968 Relative Write Latency: 0 00:07:46.968 Idle Power: Not Reported 00:07:46.968 Active Power: Not Reported 00:07:46.968 Non-Operational Permissive Mode: Not Supported 00:07:46.968 00:07:46.968 Health Information 00:07:46.968 ================== 00:07:46.968 Critical Warnings: 00:07:46.968 Available Spare Space: OK 00:07:46.968 Temperature: OK 00:07:46.968 Device Reliability: OK 00:07:46.968 Read Only: No 00:07:46.968 Volatile Memory Backup: OK 00:07:46.968 Current Temperature: 323 Kelvin (50 Celsius) 00:07:46.968 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:46.968 Available Spare: 0% 00:07:46.968 Available Spare Threshold: 0% 00:07:46.968 Life Percentage Used: 0% 00:07:46.968 Data Units Read: 998 00:07:46.968 Data Units Written: 863 00:07:46.968 Host Read Commands: 54795 00:07:46.968 Host Write Commands: 53555 00:07:46.968 Controller Busy Time: 0 minutes 00:07:46.969 Power Cycles: 0 00:07:46.969 Power On Hours: 0 hours 00:07:46.969 Unsafe Shutdowns: 0 00:07:46.969 Unrecoverable Media Errors: 0 00:07:46.969 Lifetime Error Log Entries: 0 00:07:46.969 Warning Temperature Time: 0 minutes 00:07:46.969 Critical Temperature Time: 0 minutes 00:07:46.969 00:07:46.969 Number of Queues 00:07:46.969 ================ 00:07:46.969 Number of I/O Submission Queues: 64 00:07:46.969 Number of I/O Completion Queues: 64 00:07:46.969 00:07:46.969 ZNS Specific Controller Data 00:07:46.969 ============================ 00:07:46.969 Zone Append Size Limit: 0 00:07:46.969 00:07:46.969 00:07:46.969 Active Namespaces 00:07:46.969 ================= 00:07:46.969 Namespace ID:1 00:07:46.969 Error Recovery Timeout: Unlimited 00:07:46.969 Command Set Identifier: NVM (00h) 00:07:46.969 Deallocate: Supported 
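The namespace sizing reported just below for the 12341 controller is internally consistent: 1310720 LBAs at the current LBA format #04 (4096-byte data, no metadata) is exactly 5 GiB. A quick arithmetic check:

lbas = 1310720                 # "Size (in LBAs): 1310720 (5GiB)"
block_size = 4096              # LBA Format #04: Data Size: 4096, Metadata Size: 0
assert lbas * block_size == 5 * 2**30   # 5,368,709,120 bytes = 5 GiB exactly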
00:07:46.969 Deallocated/Unwritten Error: Supported 00:07:46.969 Deallocated Read Value: All 0x00 00:07:46.969 Deallocate in Write Zeroes: Not Supported 00:07:46.969 Deallocated Guard Field: 0xFFFF 00:07:46.969 Flush: Supported 00:07:46.969 Reservation: Not Supported 00:07:46.969 Namespace Sharing Capabilities: Private 00:07:46.969 Size (in LBAs): 1310720 (5GiB) 00:07:46.969 Capacity (in LBAs): 1310720 (5GiB) 00:07:46.969 Utilization (in LBAs): 1310720 (5GiB) 00:07:46.969 Thin Provisioning: Not Supported 00:07:46.969 Per-NS Atomic Units: No 00:07:46.969 Maximum Single Source Range Length: 128 00:07:46.969 Maximum Copy Length: 128 00:07:46.969 Maximum Source Range Count: 128 00:07:46.969 NGUID/EUI64 Never Reused: No 00:07:46.969 Namespace Write Protected: No 00:07:46.969 Number of LBA Formats: 8 00:07:46.969 Current LBA Format: LBA Format #04 00:07:46.969 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:46.969 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:46.969 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:46.969 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:46.969 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:46.969 [2024-11-07 09:36:14.598993] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 62928 terminated unexpected 00:07:46.969 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:46.969 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:46.969 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:46.969 00:07:46.969 NVM Specific Namespace Data 00:07:46.969 =========================== 00:07:46.969 Logical Block Storage Tag Mask: 0 00:07:46.969 Protection Information Capabilities: 00:07:46.969 16b Guard Protection Information Storage Tag Support: No 00:07:46.969 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:46.969 Storage Tag Check Read Support: No 00:07:46.969 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.969 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.969 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.969 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.969 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.969 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.969 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.969 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.969 ===================================================== 00:07:46.969 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:46.969 ===================================================== 00:07:46.969 Controller Capabilities/Features 00:07:46.969 ================================ 00:07:46.969 Vendor ID: 1b36 00:07:46.969 Subsystem Vendor ID: 1af4 00:07:46.969 Serial Number: 12342 00:07:46.969 Model Number: QEMU NVMe Ctrl 00:07:46.969 Firmware Version: 8.0.0 00:07:46.969 Recommended Arb Burst: 6 00:07:46.969 IEEE OUI Identifier: 00 54 52 00:07:46.969 Multi-path I/O 00:07:46.969 May have multiple subsystem ports: No 00:07:46.969 May have multiple controllers: No 00:07:46.969 Associated with SR-IOV VF: No 00:07:46.969 Max Data Transfer Size: 524288 00:07:46.969 Max Number of Namespaces: 256 00:07:46.969
Max Number of I/O Queues: 64 00:07:46.969 NVMe Specification Version (VS): 1.4 00:07:46.969 NVMe Specification Version (Identify): 1.4 00:07:46.969 Maximum Queue Entries: 2048 00:07:46.969 Contiguous Queues Required: Yes 00:07:46.969 Arbitration Mechanisms Supported 00:07:46.969 Weighted Round Robin: Not Supported 00:07:46.969 Vendor Specific: Not Supported 00:07:46.969 Reset Timeout: 7500 ms 00:07:46.969 Doorbell Stride: 4 bytes 00:07:46.969 NVM Subsystem Reset: Not Supported 00:07:46.969 Command Sets Supported 00:07:46.969 NVM Command Set: Supported 00:07:46.969 Boot Partition: Not Supported 00:07:46.969 Memory Page Size Minimum: 4096 bytes 00:07:46.969 Memory Page Size Maximum: 65536 bytes 00:07:46.969 Persistent Memory Region: Not Supported 00:07:46.969 Optional Asynchronous Events Supported 00:07:46.969 Namespace Attribute Notices: Supported 00:07:46.969 Firmware Activation Notices: Not Supported 00:07:46.969 ANA Change Notices: Not Supported 00:07:46.969 PLE Aggregate Log Change Notices: Not Supported 00:07:46.969 LBA Status Info Alert Notices: Not Supported 00:07:46.969 EGE Aggregate Log Change Notices: Not Supported 00:07:46.969 Normal NVM Subsystem Shutdown event: Not Supported 00:07:46.969 Zone Descriptor Change Notices: Not Supported 00:07:46.969 Discovery Log Change Notices: Not Supported 00:07:46.969 Controller Attributes 00:07:46.969 128-bit Host Identifier: Not Supported 00:07:46.969 Non-Operational Permissive Mode: Not Supported 00:07:46.969 NVM Sets: Not Supported 00:07:46.969 Read Recovery Levels: Not Supported 00:07:46.969 Endurance Groups: Not Supported 00:07:46.969 Predictable Latency Mode: Not Supported 00:07:46.969 Traffic Based Keep ALive: Not Supported 00:07:46.969 Namespace Granularity: Not Supported 00:07:46.969 SQ Associations: Not Supported 00:07:46.969 UUID List: Not Supported 00:07:46.969 Multi-Domain Subsystem: Not Supported 00:07:46.969 Fixed Capacity Management: Not Supported 00:07:46.969 Variable Capacity Management: Not Supported 00:07:46.969 Delete Endurance Group: Not Supported 00:07:46.969 Delete NVM Set: Not Supported 00:07:46.969 Extended LBA Formats Supported: Supported 00:07:46.969 Flexible Data Placement Supported: Not Supported 00:07:46.969 00:07:46.969 Controller Memory Buffer Support 00:07:46.969 ================================ 00:07:46.969 Supported: No 00:07:46.969 00:07:46.969 Persistent Memory Region Support 00:07:46.969 ================================ 00:07:46.969 Supported: No 00:07:46.969 00:07:46.969 Admin Command Set Attributes 00:07:46.969 ============================ 00:07:46.969 Security Send/Receive: Not Supported 00:07:46.969 Format NVM: Supported 00:07:46.969 Firmware Activate/Download: Not Supported 00:07:46.969 Namespace Management: Supported 00:07:46.969 Device Self-Test: Not Supported 00:07:46.969 Directives: Supported 00:07:46.969 NVMe-MI: Not Supported 00:07:46.969 Virtualization Management: Not Supported 00:07:46.969 Doorbell Buffer Config: Supported 00:07:46.969 Get LBA Status Capability: Not Supported 00:07:46.969 Command & Feature Lockdown Capability: Not Supported 00:07:46.969 Abort Command Limit: 4 00:07:46.969 Async Event Request Limit: 4 00:07:46.969 Number of Firmware Slots: N/A 00:07:46.969 Firmware Slot 1 Read-Only: N/A 00:07:46.969 Firmware Activation Without Reset: N/A 00:07:46.969 Multiple Update Detection Support: N/A 00:07:46.969 Firmware Update Granularity: No Information Provided 00:07:46.969 Per-Namespace SMART Log: Yes 00:07:46.969 Asymmetric Namespace Access Log Page: Not Supported 00:07:46.969 
Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:46.970 Command Effects Log Page: Supported 00:07:46.970 Get Log Page Extended Data: Supported 00:07:46.970 Telemetry Log Pages: Not Supported 00:07:46.970 Persistent Event Log Pages: Not Supported 00:07:46.970 Supported Log Pages Log Page: May Support 00:07:46.970 Commands Supported & Effects Log Page: Not Supported 00:07:46.970 Feature Identifiers & Effects Log Page:May Support 00:07:46.970 NVMe-MI Commands & Effects Log Page: May Support 00:07:46.970 Data Area 4 for Telemetry Log: Not Supported 00:07:46.970 Error Log Page Entries Supported: 1 00:07:46.970 Keep Alive: Not Supported 00:07:46.970 00:07:46.970 NVM Command Set Attributes 00:07:46.970 ========================== 00:07:46.970 Submission Queue Entry Size 00:07:46.970 Max: 64 00:07:46.970 Min: 64 00:07:46.970 Completion Queue Entry Size 00:07:46.970 Max: 16 00:07:46.970 Min: 16 00:07:46.970 Number of Namespaces: 256 00:07:46.970 Compare Command: Supported 00:07:46.970 Write Uncorrectable Command: Not Supported 00:07:46.970 Dataset Management Command: Supported 00:07:46.970 Write Zeroes Command: Supported 00:07:46.970 Set Features Save Field: Supported 00:07:46.970 Reservations: Not Supported 00:07:46.970 Timestamp: Supported 00:07:46.970 Copy: Supported 00:07:46.970 Volatile Write Cache: Present 00:07:46.970 Atomic Write Unit (Normal): 1 00:07:46.970 Atomic Write Unit (PFail): 1 00:07:46.970 Atomic Compare & Write Unit: 1 00:07:46.970 Fused Compare & Write: Not Supported 00:07:46.970 Scatter-Gather List 00:07:46.970 SGL Command Set: Supported 00:07:46.970 SGL Keyed: Not Supported 00:07:46.970 SGL Bit Bucket Descriptor: Not Supported 00:07:46.970 SGL Metadata Pointer: Not Supported 00:07:46.970 Oversized SGL: Not Supported 00:07:46.970 SGL Metadata Address: Not Supported 00:07:46.970 SGL Offset: Not Supported 00:07:46.970 Transport SGL Data Block: Not Supported 00:07:46.970 Replay Protected Memory Block: Not Supported 00:07:46.970 00:07:46.970 Firmware Slot Information 00:07:46.970 ========================= 00:07:46.970 Active slot: 1 00:07:46.970 Slot 1 Firmware Revision: 1.0 00:07:46.970 00:07:46.970 00:07:46.970 Commands Supported and Effects 00:07:46.970 ============================== 00:07:46.970 Admin Commands 00:07:46.970 -------------- 00:07:46.970 Delete I/O Submission Queue (00h): Supported 00:07:46.970 Create I/O Submission Queue (01h): Supported 00:07:46.970 Get Log Page (02h): Supported 00:07:46.970 Delete I/O Completion Queue (04h): Supported 00:07:46.970 Create I/O Completion Queue (05h): Supported 00:07:46.970 Identify (06h): Supported 00:07:46.970 Abort (08h): Supported 00:07:46.970 Set Features (09h): Supported 00:07:46.970 Get Features (0Ah): Supported 00:07:46.970 Asynchronous Event Request (0Ch): Supported 00:07:46.970 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:46.970 Directive Send (19h): Supported 00:07:46.970 Directive Receive (1Ah): Supported 00:07:46.970 Virtualization Management (1Ch): Supported 00:07:46.970 Doorbell Buffer Config (7Ch): Supported 00:07:46.970 Format NVM (80h): Supported LBA-Change 00:07:46.970 I/O Commands 00:07:46.970 ------------ 00:07:46.970 Flush (00h): Supported LBA-Change 00:07:46.970 Write (01h): Supported LBA-Change 00:07:46.970 Read (02h): Supported 00:07:46.970 Compare (05h): Supported 00:07:46.970 Write Zeroes (08h): Supported LBA-Change 00:07:46.970 Dataset Management (09h): Supported LBA-Change 00:07:46.970 Unknown (0Ch): Supported 00:07:46.970 Unknown (12h): Supported 00:07:46.970 Copy (19h): Supported 
LBA-Change 00:07:46.970 Unknown (1Dh): Supported LBA-Change 00:07:46.970 00:07:46.970 Error Log 00:07:46.970 ========= 00:07:46.970 00:07:46.970 Arbitration 00:07:46.970 =========== 00:07:46.970 Arbitration Burst: no limit 00:07:46.970 00:07:46.970 Power Management 00:07:46.970 ================ 00:07:46.970 Number of Power States: 1 00:07:46.970 Current Power State: Power State #0 00:07:46.970 Power State #0: 00:07:46.970 Max Power: 25.00 W 00:07:46.970 Non-Operational State: Operational 00:07:46.970 Entry Latency: 16 microseconds 00:07:46.970 Exit Latency: 4 microseconds 00:07:46.970 Relative Read Throughput: 0 00:07:46.970 Relative Read Latency: 0 00:07:46.970 Relative Write Throughput: 0 00:07:46.970 Relative Write Latency: 0 00:07:46.970 Idle Power: Not Reported 00:07:46.970 Active Power: Not Reported 00:07:46.970 Non-Operational Permissive Mode: Not Supported 00:07:46.970 00:07:46.970 Health Information 00:07:46.970 ================== 00:07:46.970 Critical Warnings: 00:07:46.970 Available Spare Space: OK 00:07:46.970 Temperature: OK 00:07:46.970 Device Reliability: OK 00:07:46.970 Read Only: No 00:07:46.970 Volatile Memory Backup: OK 00:07:46.970 Current Temperature: 323 Kelvin (50 Celsius) 00:07:46.970 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:46.970 Available Spare: 0% 00:07:46.970 Available Spare Threshold: 0% 00:07:46.970 Life Percentage Used: 0% 00:07:46.970 Data Units Read: 2295 00:07:46.970 Data Units Written: 2082 00:07:46.970 Host Read Commands: 118429 00:07:46.970 Host Write Commands: 116700 00:07:46.970 Controller Busy Time: 0 minutes 00:07:46.970 Power Cycles: 0 00:07:46.970 Power On Hours: 0 hours 00:07:46.970 Unsafe Shutdowns: 0 00:07:46.970 Unrecoverable Media Errors: 0 00:07:46.970 Lifetime Error Log Entries: 0 00:07:46.970 Warning Temperature Time: 0 minutes 00:07:46.970 Critical Temperature Time: 0 minutes 00:07:46.970 00:07:46.970 Number of Queues 00:07:46.970 ================ 00:07:46.970 Number of I/O Submission Queues: 64 00:07:46.970 Number of I/O Completion Queues: 64 00:07:46.970 00:07:46.970 ZNS Specific Controller Data 00:07:46.970 ============================ 00:07:46.970 Zone Append Size Limit: 0 00:07:46.970 00:07:46.970 00:07:46.970 Active Namespaces 00:07:46.970 ================= 00:07:46.970 Namespace ID:1 00:07:46.970 Error Recovery Timeout: Unlimited 00:07:46.970 Command Set Identifier: NVM (00h) 00:07:46.970 Deallocate: Supported 00:07:46.970 Deallocated/Unwritten Error: Supported 00:07:46.970 Deallocated Read Value: All 0x00 00:07:46.970 Deallocate in Write Zeroes: Not Supported 00:07:46.970 Deallocated Guard Field: 0xFFFF 00:07:46.970 Flush: Supported 00:07:46.970 Reservation: Not Supported 00:07:46.970 Namespace Sharing Capabilities: Private 00:07:46.970 Size (in LBAs): 1048576 (4GiB) 00:07:46.970 Capacity (in LBAs): 1048576 (4GiB) 00:07:46.970 Utilization (in LBAs): 1048576 (4GiB) 00:07:46.970 Thin Provisioning: Not Supported 00:07:46.970 Per-NS Atomic Units: No 00:07:46.970 Maximum Single Source Range Length: 128 00:07:46.970 Maximum Copy Length: 128 00:07:46.970 Maximum Source Range Count: 128 00:07:46.970 NGUID/EUI64 Never Reused: No 00:07:46.970 Namespace Write Protected: No 00:07:46.970 Number of LBA Formats: 8 00:07:46.970 Current LBA Format: LBA Format #04 00:07:46.970 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:46.970 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:46.970 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:46.970 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:46.970 LBA Format #04: 
Data Size: 4096 Metadata Size: 0 00:07:46.970 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:46.970 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:46.970 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:46.970 00:07:46.970 NVM Specific Namespace Data 00:07:46.970 =========================== 00:07:46.970 Logical Block Storage Tag Mask: 0 00:07:46.970 Protection Information Capabilities: 00:07:46.970 16b Guard Protection Information Storage Tag Support: No 00:07:46.970 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:46.970 Storage Tag Check Read Support: No 00:07:46.970 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.970 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.970 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.970 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.970 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.970 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.970 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.970 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.970 Namespace ID:2 00:07:46.970 Error Recovery Timeout: Unlimited 00:07:46.970 Command Set Identifier: NVM (00h) 00:07:46.970 Deallocate: Supported 00:07:46.970 Deallocated/Unwritten Error: Supported 00:07:46.970 Deallocated Read Value: All 0x00 00:07:46.970 Deallocate in Write Zeroes: Not Supported 00:07:46.970 Deallocated Guard Field: 0xFFFF 00:07:46.970 Flush: Supported 00:07:46.970 Reservation: Not Supported 00:07:46.970 Namespace Sharing Capabilities: Private 00:07:46.970 Size (in LBAs): 1048576 (4GiB) 00:07:46.970 Capacity (in LBAs): 1048576 (4GiB) 00:07:46.970 Utilization (in LBAs): 1048576 (4GiB) 00:07:46.970 Thin Provisioning: Not Supported 00:07:46.970 Per-NS Atomic Units: No 00:07:46.971 Maximum Single Source Range Length: 128 00:07:46.971 Maximum Copy Length: 128 00:07:46.971 Maximum Source Range Count: 128 00:07:46.971 NGUID/EUI64 Never Reused: No 00:07:46.971 Namespace Write Protected: No 00:07:46.971 Number of LBA Formats: 8 00:07:46.971 Current LBA Format: LBA Format #04 00:07:46.971 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:46.971 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:46.971 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:46.971 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:46.971 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:46.971 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:46.971 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:46.971 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:46.971 00:07:46.971 NVM Specific Namespace Data 00:07:46.971 =========================== 00:07:46.971 Logical Block Storage Tag Mask: 0 00:07:46.971 Protection Information Capabilities: 00:07:46.971 16b Guard Protection Information Storage Tag Support: No 00:07:46.971 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:46.971 Storage Tag Check Read Support: No 00:07:46.971 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.971 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 
16b Guard PI 00:07:46.971 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.971 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.971 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.971 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.971 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.971 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.971 Namespace ID:3 00:07:46.971 Error Recovery Timeout: Unlimited 00:07:46.971 Command Set Identifier: NVM (00h) 00:07:46.971 Deallocate: Supported 00:07:46.971 Deallocated/Unwritten Error: Supported 00:07:46.971 Deallocated Read Value: All 0x00 00:07:46.971 Deallocate in Write Zeroes: Not Supported 00:07:46.971 Deallocated Guard Field: 0xFFFF 00:07:46.971 Flush: Supported 00:07:46.971 Reservation: Not Supported 00:07:46.971 Namespace Sharing Capabilities: Private 00:07:46.971 Size (in LBAs): 1048576 (4GiB) 00:07:46.971 Capacity (in LBAs): 1048576 (4GiB) 00:07:46.971 Utilization (in LBAs): 1048576 (4GiB) 00:07:46.971 Thin Provisioning: Not Supported 00:07:46.971 Per-NS Atomic Units: No 00:07:46.971 Maximum Single Source Range Length: 128 00:07:46.971 Maximum Copy Length: 128 00:07:46.971 Maximum Source Range Count: 128 00:07:46.971 NGUID/EUI64 Never Reused: No 00:07:46.971 Namespace Write Protected: No 00:07:46.971 Number of LBA Formats: 8 00:07:46.971 Current LBA Format: LBA Format #04 00:07:46.971 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:46.971 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:46.971 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:46.971 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:46.971 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:46.971 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:46.971 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:46.971 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:46.971 00:07:46.971 NVM Specific Namespace Data 00:07:46.971 =========================== 00:07:46.971 Logical Block Storage Tag Mask: 0 00:07:46.971 Protection Information Capabilities: 00:07:46.971 16b Guard Protection Information Storage Tag Support: No 00:07:46.971 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:46.971 Storage Tag Check Read Support: No 00:07:46.971 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.971 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.971 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.971 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.971 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.971 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.971 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:46.971 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.233 09:36:14 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:47.233 09:36:14 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:07:47.233 ===================================================== 00:07:47.233 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:47.233 ===================================================== 00:07:47.233 Controller Capabilities/Features 00:07:47.233 ================================ 00:07:47.233 Vendor ID: 1b36 00:07:47.233 Subsystem Vendor ID: 1af4 00:07:47.233 Serial Number: 12340 00:07:47.233 Model Number: QEMU NVMe Ctrl 00:07:47.233 Firmware Version: 8.0.0 00:07:47.233 Recommended Arb Burst: 6 00:07:47.233 IEEE OUI Identifier: 00 54 52 00:07:47.233 Multi-path I/O 00:07:47.233 May have multiple subsystem ports: No 00:07:47.233 May have multiple controllers: No 00:07:47.233 Associated with SR-IOV VF: No 00:07:47.233 Max Data Transfer Size: 524288 00:07:47.233 Max Number of Namespaces: 256 00:07:47.233 Max Number of I/O Queues: 64 00:07:47.233 NVMe Specification Version (VS): 1.4 00:07:47.233 NVMe Specification Version (Identify): 1.4 00:07:47.233 Maximum Queue Entries: 2048 00:07:47.233 Contiguous Queues Required: Yes 00:07:47.233 Arbitration Mechanisms Supported 00:07:47.233 Weighted Round Robin: Not Supported 00:07:47.233 Vendor Specific: Not Supported 00:07:47.233 Reset Timeout: 7500 ms 00:07:47.233 Doorbell Stride: 4 bytes 00:07:47.233 NVM Subsystem Reset: Not Supported 00:07:47.233 Command Sets Supported 00:07:47.233 NVM Command Set: Supported 00:07:47.233 Boot Partition: Not Supported 00:07:47.233 Memory Page Size Minimum: 4096 bytes 00:07:47.233 Memory Page Size Maximum: 65536 bytes 00:07:47.233 Persistent Memory Region: Not Supported 00:07:47.233 Optional Asynchronous Events Supported 00:07:47.233 Namespace Attribute Notices: Supported 00:07:47.233 Firmware Activation Notices: Not Supported 00:07:47.233 ANA Change Notices: Not Supported 00:07:47.233 PLE Aggregate Log Change Notices: Not Supported 00:07:47.233 LBA Status Info Alert Notices: Not Supported 00:07:47.233 EGE Aggregate Log Change Notices: Not Supported 00:07:47.233 Normal NVM Subsystem Shutdown event: Not Supported 00:07:47.233 Zone Descriptor Change Notices: Not Supported 00:07:47.233 Discovery Log Change Notices: Not Supported 00:07:47.233 Controller Attributes 00:07:47.233 128-bit Host Identifier: Not Supported 00:07:47.233 Non-Operational Permissive Mode: Not Supported 00:07:47.233 NVM Sets: Not Supported 00:07:47.233 Read Recovery Levels: Not Supported 00:07:47.233 Endurance Groups: Not Supported 00:07:47.233 Predictable Latency Mode: Not Supported 00:07:47.233 Traffic Based Keep ALive: Not Supported 00:07:47.233 Namespace Granularity: Not Supported 00:07:47.233 SQ Associations: Not Supported 00:07:47.233 UUID List: Not Supported 00:07:47.233 Multi-Domain Subsystem: Not Supported 00:07:47.233 Fixed Capacity Management: Not Supported 00:07:47.233 Variable Capacity Management: Not Supported 00:07:47.233 Delete Endurance Group: Not Supported 00:07:47.233 Delete NVM Set: Not Supported 00:07:47.233 Extended LBA Formats Supported: Supported 00:07:47.233 Flexible Data Placement Supported: Not Supported 00:07:47.233 00:07:47.233 Controller Memory Buffer Support 00:07:47.233 ================================ 00:07:47.233 Supported: No 00:07:47.233 00:07:47.233 Persistent Memory Region Support 00:07:47.233 ================================ 00:07:47.233 Supported: No 00:07:47.233 00:07:47.233 Admin Command Set Attributes 00:07:47.233 ============================ 00:07:47.233 Security Send/Receive: Not Supported 00:07:47.233 
Format NVM: Supported 00:07:47.233 Firmware Activate/Download: Not Supported 00:07:47.233 Namespace Management: Supported 00:07:47.233 Device Self-Test: Not Supported 00:07:47.233 Directives: Supported 00:07:47.233 NVMe-MI: Not Supported 00:07:47.233 Virtualization Management: Not Supported 00:07:47.233 Doorbell Buffer Config: Supported 00:07:47.233 Get LBA Status Capability: Not Supported 00:07:47.233 Command & Feature Lockdown Capability: Not Supported 00:07:47.233 Abort Command Limit: 4 00:07:47.233 Async Event Request Limit: 4 00:07:47.233 Number of Firmware Slots: N/A 00:07:47.233 Firmware Slot 1 Read-Only: N/A 00:07:47.233 Firmware Activation Without Reset: N/A 00:07:47.233 Multiple Update Detection Support: N/A 00:07:47.233 Firmware Update Granularity: No Information Provided 00:07:47.233 Per-Namespace SMART Log: Yes 00:07:47.233 Asymmetric Namespace Access Log Page: Not Supported 00:07:47.233 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:47.233 Command Effects Log Page: Supported 00:07:47.233 Get Log Page Extended Data: Supported 00:07:47.233 Telemetry Log Pages: Not Supported 00:07:47.233 Persistent Event Log Pages: Not Supported 00:07:47.233 Supported Log Pages Log Page: May Support 00:07:47.233 Commands Supported & Effects Log Page: Not Supported 00:07:47.233 Feature Identifiers & Effects Log Page:May Support 00:07:47.233 NVMe-MI Commands & Effects Log Page: May Support 00:07:47.233 Data Area 4 for Telemetry Log: Not Supported 00:07:47.233 Error Log Page Entries Supported: 1 00:07:47.233 Keep Alive: Not Supported 00:07:47.233 00:07:47.233 NVM Command Set Attributes 00:07:47.233 ========================== 00:07:47.233 Submission Queue Entry Size 00:07:47.233 Max: 64 00:07:47.233 Min: 64 00:07:47.233 Completion Queue Entry Size 00:07:47.233 Max: 16 00:07:47.233 Min: 16 00:07:47.233 Number of Namespaces: 256 00:07:47.233 Compare Command: Supported 00:07:47.233 Write Uncorrectable Command: Not Supported 00:07:47.233 Dataset Management Command: Supported 00:07:47.233 Write Zeroes Command: Supported 00:07:47.233 Set Features Save Field: Supported 00:07:47.233 Reservations: Not Supported 00:07:47.233 Timestamp: Supported 00:07:47.233 Copy: Supported 00:07:47.233 Volatile Write Cache: Present 00:07:47.233 Atomic Write Unit (Normal): 1 00:07:47.233 Atomic Write Unit (PFail): 1 00:07:47.233 Atomic Compare & Write Unit: 1 00:07:47.233 Fused Compare & Write: Not Supported 00:07:47.233 Scatter-Gather List 00:07:47.233 SGL Command Set: Supported 00:07:47.233 SGL Keyed: Not Supported 00:07:47.233 SGL Bit Bucket Descriptor: Not Supported 00:07:47.233 SGL Metadata Pointer: Not Supported 00:07:47.233 Oversized SGL: Not Supported 00:07:47.233 SGL Metadata Address: Not Supported 00:07:47.233 SGL Offset: Not Supported 00:07:47.233 Transport SGL Data Block: Not Supported 00:07:47.233 Replay Protected Memory Block: Not Supported 00:07:47.233 00:07:47.233 Firmware Slot Information 00:07:47.233 ========================= 00:07:47.233 Active slot: 1 00:07:47.233 Slot 1 Firmware Revision: 1.0 00:07:47.233 00:07:47.233 00:07:47.233 Commands Supported and Effects 00:07:47.233 ============================== 00:07:47.233 Admin Commands 00:07:47.233 -------------- 00:07:47.233 Delete I/O Submission Queue (00h): Supported 00:07:47.233 Create I/O Submission Queue (01h): Supported 00:07:47.233 Get Log Page (02h): Supported 00:07:47.233 Delete I/O Completion Queue (04h): Supported 00:07:47.233 Create I/O Completion Queue (05h): Supported 00:07:47.233 Identify (06h): Supported 00:07:47.233 Abort (08h): Supported 
00:07:47.233 Set Features (09h): Supported 00:07:47.233 Get Features (0Ah): Supported 00:07:47.233 Asynchronous Event Request (0Ch): Supported 00:07:47.233 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:47.233 Directive Send (19h): Supported 00:07:47.233 Directive Receive (1Ah): Supported 00:07:47.233 Virtualization Management (1Ch): Supported 00:07:47.233 Doorbell Buffer Config (7Ch): Supported 00:07:47.233 Format NVM (80h): Supported LBA-Change 00:07:47.233 I/O Commands 00:07:47.233 ------------ 00:07:47.234 Flush (00h): Supported LBA-Change 00:07:47.234 Write (01h): Supported LBA-Change 00:07:47.234 Read (02h): Supported 00:07:47.234 Compare (05h): Supported 00:07:47.234 Write Zeroes (08h): Supported LBA-Change 00:07:47.234 Dataset Management (09h): Supported LBA-Change 00:07:47.234 Unknown (0Ch): Supported 00:07:47.234 Unknown (12h): Supported 00:07:47.234 Copy (19h): Supported LBA-Change 00:07:47.234 Unknown (1Dh): Supported LBA-Change 00:07:47.234 00:07:47.234 Error Log 00:07:47.234 ========= 00:07:47.234 00:07:47.234 Arbitration 00:07:47.234 =========== 00:07:47.234 Arbitration Burst: no limit 00:07:47.234 00:07:47.234 Power Management 00:07:47.234 ================ 00:07:47.234 Number of Power States: 1 00:07:47.234 Current Power State: Power State #0 00:07:47.234 Power State #0: 00:07:47.234 Max Power: 25.00 W 00:07:47.234 Non-Operational State: Operational 00:07:47.234 Entry Latency: 16 microseconds 00:07:47.234 Exit Latency: 4 microseconds 00:07:47.234 Relative Read Throughput: 0 00:07:47.234 Relative Read Latency: 0 00:07:47.234 Relative Write Throughput: 0 00:07:47.234 Relative Write Latency: 0 00:07:47.234 Idle Power: Not Reported 00:07:47.234 Active Power: Not Reported 00:07:47.234 Non-Operational Permissive Mode: Not Supported 00:07:47.234 00:07:47.234 Health Information 00:07:47.234 ================== 00:07:47.234 Critical Warnings: 00:07:47.234 Available Spare Space: OK 00:07:47.234 Temperature: OK 00:07:47.234 Device Reliability: OK 00:07:47.234 Read Only: No 00:07:47.234 Volatile Memory Backup: OK 00:07:47.234 Current Temperature: 323 Kelvin (50 Celsius) 00:07:47.234 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:47.234 Available Spare: 0% 00:07:47.234 Available Spare Threshold: 0% 00:07:47.234 Life Percentage Used: 0% 00:07:47.234 Data Units Read: 673 00:07:47.234 Data Units Written: 601 00:07:47.234 Host Read Commands: 38519 00:07:47.234 Host Write Commands: 38305 00:07:47.234 Controller Busy Time: 0 minutes 00:07:47.234 Power Cycles: 0 00:07:47.234 Power On Hours: 0 hours 00:07:47.234 Unsafe Shutdowns: 0 00:07:47.234 Unrecoverable Media Errors: 0 00:07:47.234 Lifetime Error Log Entries: 0 00:07:47.234 Warning Temperature Time: 0 minutes 00:07:47.234 Critical Temperature Time: 0 minutes 00:07:47.234 00:07:47.234 Number of Queues 00:07:47.234 ================ 00:07:47.234 Number of I/O Submission Queues: 64 00:07:47.234 Number of I/O Completion Queues: 64 00:07:47.234 00:07:47.234 ZNS Specific Controller Data 00:07:47.234 ============================ 00:07:47.234 Zone Append Size Limit: 0 00:07:47.234 00:07:47.234 00:07:47.234 Active Namespaces 00:07:47.234 ================= 00:07:47.234 Namespace ID:1 00:07:47.234 Error Recovery Timeout: Unlimited 00:07:47.234 Command Set Identifier: NVM (00h) 00:07:47.234 Deallocate: Supported 00:07:47.234 Deallocated/Unwritten Error: Supported 00:07:47.234 Deallocated Read Value: All 0x00 00:07:47.234 Deallocate in Write Zeroes: Not Supported 00:07:47.234 Deallocated Guard Field: 0xFFFF 00:07:47.234 Flush: 
Supported 00:07:47.234 Reservation: Not Supported 00:07:47.234 Metadata Transferred as: Separate Metadata Buffer 00:07:47.234 Namespace Sharing Capabilities: Private 00:07:47.234 Size (in LBAs): 1548666 (5GiB) 00:07:47.234 Capacity (in LBAs): 1548666 (5GiB) 00:07:47.234 Utilization (in LBAs): 1548666 (5GiB) 00:07:47.234 Thin Provisioning: Not Supported 00:07:47.234 Per-NS Atomic Units: No 00:07:47.234 Maximum Single Source Range Length: 128 00:07:47.234 Maximum Copy Length: 128 00:07:47.234 Maximum Source Range Count: 128 00:07:47.234 NGUID/EUI64 Never Reused: No 00:07:47.234 Namespace Write Protected: No 00:07:47.234 Number of LBA Formats: 8 00:07:47.234 Current LBA Format: LBA Format #07 00:07:47.234 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:47.234 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:47.234 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:47.234 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:47.234 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:47.234 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:47.234 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:47.234 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:47.234 00:07:47.234 NVM Specific Namespace Data 00:07:47.234 =========================== 00:07:47.234 Logical Block Storage Tag Mask: 0 00:07:47.234 Protection Information Capabilities: 00:07:47.234 16b Guard Protection Information Storage Tag Support: No 00:07:47.234 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:47.234 Storage Tag Check Read Support: No 00:07:47.234 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.234 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.234 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.234 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.234 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.234 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.234 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.234 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.234 09:36:14 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:47.234 09:36:14 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:07:47.496 ===================================================== 00:07:47.496 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:47.496 ===================================================== 00:07:47.496 Controller Capabilities/Features 00:07:47.496 ================================ 00:07:47.496 Vendor ID: 1b36 00:07:47.496 Subsystem Vendor ID: 1af4 00:07:47.496 Serial Number: 12341 00:07:47.496 Model Number: QEMU NVMe Ctrl 00:07:47.496 Firmware Version: 8.0.0 00:07:47.496 Recommended Arb Burst: 6 00:07:47.496 IEEE OUI Identifier: 00 54 52 00:07:47.496 Multi-path I/O 00:07:47.496 May have multiple subsystem ports: No 00:07:47.496 May have multiple controllers: No 00:07:47.496 Associated with SR-IOV VF: No 00:07:47.496 Max Data Transfer Size: 524288 00:07:47.496 Max Number of Namespaces: 256 00:07:47.496 Max Number of I/O Queues: 64 00:07:47.496 NVMe 
Specification Version (VS): 1.4 00:07:47.496 NVMe Specification Version (Identify): 1.4 00:07:47.496 Maximum Queue Entries: 2048 00:07:47.496 Contiguous Queues Required: Yes 00:07:47.496 Arbitration Mechanisms Supported 00:07:47.496 Weighted Round Robin: Not Supported 00:07:47.496 Vendor Specific: Not Supported 00:07:47.496 Reset Timeout: 7500 ms 00:07:47.496 Doorbell Stride: 4 bytes 00:07:47.496 NVM Subsystem Reset: Not Supported 00:07:47.496 Command Sets Supported 00:07:47.496 NVM Command Set: Supported 00:07:47.496 Boot Partition: Not Supported 00:07:47.496 Memory Page Size Minimum: 4096 bytes 00:07:47.496 Memory Page Size Maximum: 65536 bytes 00:07:47.496 Persistent Memory Region: Not Supported 00:07:47.496 Optional Asynchronous Events Supported 00:07:47.496 Namespace Attribute Notices: Supported 00:07:47.496 Firmware Activation Notices: Not Supported 00:07:47.496 ANA Change Notices: Not Supported 00:07:47.496 PLE Aggregate Log Change Notices: Not Supported 00:07:47.496 LBA Status Info Alert Notices: Not Supported 00:07:47.496 EGE Aggregate Log Change Notices: Not Supported 00:07:47.496 Normal NVM Subsystem Shutdown event: Not Supported 00:07:47.496 Zone Descriptor Change Notices: Not Supported 00:07:47.496 Discovery Log Change Notices: Not Supported 00:07:47.496 Controller Attributes 00:07:47.496 128-bit Host Identifier: Not Supported 00:07:47.496 Non-Operational Permissive Mode: Not Supported 00:07:47.496 NVM Sets: Not Supported 00:07:47.496 Read Recovery Levels: Not Supported 00:07:47.496 Endurance Groups: Not Supported 00:07:47.496 Predictable Latency Mode: Not Supported 00:07:47.496 Traffic Based Keep ALive: Not Supported 00:07:47.496 Namespace Granularity: Not Supported 00:07:47.496 SQ Associations: Not Supported 00:07:47.496 UUID List: Not Supported 00:07:47.496 Multi-Domain Subsystem: Not Supported 00:07:47.496 Fixed Capacity Management: Not Supported 00:07:47.496 Variable Capacity Management: Not Supported 00:07:47.496 Delete Endurance Group: Not Supported 00:07:47.496 Delete NVM Set: Not Supported 00:07:47.496 Extended LBA Formats Supported: Supported 00:07:47.496 Flexible Data Placement Supported: Not Supported 00:07:47.496 00:07:47.496 Controller Memory Buffer Support 00:07:47.496 ================================ 00:07:47.496 Supported: No 00:07:47.496 00:07:47.496 Persistent Memory Region Support 00:07:47.496 ================================ 00:07:47.496 Supported: No 00:07:47.496 00:07:47.496 Admin Command Set Attributes 00:07:47.496 ============================ 00:07:47.496 Security Send/Receive: Not Supported 00:07:47.496 Format NVM: Supported 00:07:47.496 Firmware Activate/Download: Not Supported 00:07:47.496 Namespace Management: Supported 00:07:47.496 Device Self-Test: Not Supported 00:07:47.496 Directives: Supported 00:07:47.496 NVMe-MI: Not Supported 00:07:47.496 Virtualization Management: Not Supported 00:07:47.496 Doorbell Buffer Config: Supported 00:07:47.496 Get LBA Status Capability: Not Supported 00:07:47.496 Command & Feature Lockdown Capability: Not Supported 00:07:47.496 Abort Command Limit: 4 00:07:47.496 Async Event Request Limit: 4 00:07:47.496 Number of Firmware Slots: N/A 00:07:47.496 Firmware Slot 1 Read-Only: N/A 00:07:47.496 Firmware Activation Without Reset: N/A 00:07:47.496 Multiple Update Detection Support: N/A 00:07:47.496 Firmware Update Granularity: No Information Provided 00:07:47.496 Per-Namespace SMART Log: Yes 00:07:47.496 Asymmetric Namespace Access Log Page: Not Supported 00:07:47.496 Subsystem NQN: nqn.2019-08.org.qemu:12341 
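These per-controller dumps are emitted one BDF at a time by the loop traced as nvme/nvme.sh@15 (`for bdf in "${bdfs[@]}"`) and nvme/nvme.sh@16 in this log. A minimal sketch of that kind of driver loop, with the BDF list hard-coded here from the four addresses visible in this run (the test script builds the list itself) and the binary path taken from the invocations above:

  #!/usr/bin/env bash
  # Dump identify data for each PCIe NVMe controller under test,
  # mirroring the exact spdk_nvme_identify invocation seen in this log.
  bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)
  for bdf in "${bdfs[@]}"; do
      /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
          -r "trtype:PCIe traddr:${bdf}" -i 0
  done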
00:07:47.496 Command Effects Log Page: Supported 00:07:47.496 Get Log Page Extended Data: Supported 00:07:47.496 Telemetry Log Pages: Not Supported 00:07:47.496 Persistent Event Log Pages: Not Supported 00:07:47.496 Supported Log Pages Log Page: May Support 00:07:47.496 Commands Supported & Effects Log Page: Not Supported 00:07:47.496 Feature Identifiers & Effects Log Page:May Support 00:07:47.496 NVMe-MI Commands & Effects Log Page: May Support 00:07:47.496 Data Area 4 for Telemetry Log: Not Supported 00:07:47.496 Error Log Page Entries Supported: 1 00:07:47.496 Keep Alive: Not Supported 00:07:47.496 00:07:47.496 NVM Command Set Attributes 00:07:47.496 ========================== 00:07:47.496 Submission Queue Entry Size 00:07:47.496 Max: 64 00:07:47.496 Min: 64 00:07:47.496 Completion Queue Entry Size 00:07:47.496 Max: 16 00:07:47.496 Min: 16 00:07:47.496 Number of Namespaces: 256 00:07:47.496 Compare Command: Supported 00:07:47.496 Write Uncorrectable Command: Not Supported 00:07:47.496 Dataset Management Command: Supported 00:07:47.496 Write Zeroes Command: Supported 00:07:47.496 Set Features Save Field: Supported 00:07:47.496 Reservations: Not Supported 00:07:47.496 Timestamp: Supported 00:07:47.496 Copy: Supported 00:07:47.496 Volatile Write Cache: Present 00:07:47.496 Atomic Write Unit (Normal): 1 00:07:47.496 Atomic Write Unit (PFail): 1 00:07:47.496 Atomic Compare & Write Unit: 1 00:07:47.496 Fused Compare & Write: Not Supported 00:07:47.496 Scatter-Gather List 00:07:47.496 SGL Command Set: Supported 00:07:47.496 SGL Keyed: Not Supported 00:07:47.496 SGL Bit Bucket Descriptor: Not Supported 00:07:47.496 SGL Metadata Pointer: Not Supported 00:07:47.496 Oversized SGL: Not Supported 00:07:47.496 SGL Metadata Address: Not Supported 00:07:47.496 SGL Offset: Not Supported 00:07:47.496 Transport SGL Data Block: Not Supported 00:07:47.496 Replay Protected Memory Block: Not Supported 00:07:47.496 00:07:47.496 Firmware Slot Information 00:07:47.496 ========================= 00:07:47.496 Active slot: 1 00:07:47.496 Slot 1 Firmware Revision: 1.0 00:07:47.496 00:07:47.496 00:07:47.496 Commands Supported and Effects 00:07:47.496 ============================== 00:07:47.496 Admin Commands 00:07:47.496 -------------- 00:07:47.496 Delete I/O Submission Queue (00h): Supported 00:07:47.496 Create I/O Submission Queue (01h): Supported 00:07:47.496 Get Log Page (02h): Supported 00:07:47.496 Delete I/O Completion Queue (04h): Supported 00:07:47.496 Create I/O Completion Queue (05h): Supported 00:07:47.497 Identify (06h): Supported 00:07:47.497 Abort (08h): Supported 00:07:47.497 Set Features (09h): Supported 00:07:47.497 Get Features (0Ah): Supported 00:07:47.497 Asynchronous Event Request (0Ch): Supported 00:07:47.497 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:47.497 Directive Send (19h): Supported 00:07:47.497 Directive Receive (1Ah): Supported 00:07:47.497 Virtualization Management (1Ch): Supported 00:07:47.497 Doorbell Buffer Config (7Ch): Supported 00:07:47.497 Format NVM (80h): Supported LBA-Change 00:07:47.497 I/O Commands 00:07:47.497 ------------ 00:07:47.497 Flush (00h): Supported LBA-Change 00:07:47.497 Write (01h): Supported LBA-Change 00:07:47.497 Read (02h): Supported 00:07:47.497 Compare (05h): Supported 00:07:47.497 Write Zeroes (08h): Supported LBA-Change 00:07:47.497 Dataset Management (09h): Supported LBA-Change 00:07:47.497 Unknown (0Ch): Supported 00:07:47.497 Unknown (12h): Supported 00:07:47.497 Copy (19h): Supported LBA-Change 00:07:47.497 Unknown (1Dh): 
Supported LBA-Change 00:07:47.497 00:07:47.497 Error Log 00:07:47.497 ========= 00:07:47.497 00:07:47.497 Arbitration 00:07:47.497 =========== 00:07:47.497 Arbitration Burst: no limit 00:07:47.497 00:07:47.497 Power Management 00:07:47.497 ================ 00:07:47.497 Number of Power States: 1 00:07:47.497 Current Power State: Power State #0 00:07:47.497 Power State #0: 00:07:47.497 Max Power: 25.00 W 00:07:47.497 Non-Operational State: Operational 00:07:47.497 Entry Latency: 16 microseconds 00:07:47.497 Exit Latency: 4 microseconds 00:07:47.497 Relative Read Throughput: 0 00:07:47.497 Relative Read Latency: 0 00:07:47.497 Relative Write Throughput: 0 00:07:47.497 Relative Write Latency: 0 00:07:47.497 Idle Power: Not Reported 00:07:47.497 Active Power: Not Reported 00:07:47.497 Non-Operational Permissive Mode: Not Supported 00:07:47.497 00:07:47.497 Health Information 00:07:47.497 ================== 00:07:47.497 Critical Warnings: 00:07:47.497 Available Spare Space: OK 00:07:47.497 Temperature: OK 00:07:47.497 Device Reliability: OK 00:07:47.497 Read Only: No 00:07:47.497 Volatile Memory Backup: OK 00:07:47.497 Current Temperature: 323 Kelvin (50 Celsius) 00:07:47.497 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:47.497 Available Spare: 0% 00:07:47.497 Available Spare Threshold: 0% 00:07:47.497 Life Percentage Used: 0% 00:07:47.497 Data Units Read: 998 00:07:47.497 Data Units Written: 863 00:07:47.497 Host Read Commands: 54795 00:07:47.497 Host Write Commands: 53555 00:07:47.497 Controller Busy Time: 0 minutes 00:07:47.497 Power Cycles: 0 00:07:47.497 Power On Hours: 0 hours 00:07:47.497 Unsafe Shutdowns: 0 00:07:47.497 Unrecoverable Media Errors: 0 00:07:47.497 Lifetime Error Log Entries: 0 00:07:47.497 Warning Temperature Time: 0 minutes 00:07:47.497 Critical Temperature Time: 0 minutes 00:07:47.497 00:07:47.497 Number of Queues 00:07:47.497 ================ 00:07:47.497 Number of I/O Submission Queues: 64 00:07:47.497 Number of I/O Completion Queues: 64 00:07:47.497 00:07:47.497 ZNS Specific Controller Data 00:07:47.497 ============================ 00:07:47.497 Zone Append Size Limit: 0 00:07:47.497 00:07:47.497 00:07:47.497 Active Namespaces 00:07:47.497 ================= 00:07:47.497 Namespace ID:1 00:07:47.497 Error Recovery Timeout: Unlimited 00:07:47.497 Command Set Identifier: NVM (00h) 00:07:47.497 Deallocate: Supported 00:07:47.497 Deallocated/Unwritten Error: Supported 00:07:47.497 Deallocated Read Value: All 0x00 00:07:47.497 Deallocate in Write Zeroes: Not Supported 00:07:47.497 Deallocated Guard Field: 0xFFFF 00:07:47.497 Flush: Supported 00:07:47.497 Reservation: Not Supported 00:07:47.497 Namespace Sharing Capabilities: Private 00:07:47.497 Size (in LBAs): 1310720 (5GiB) 00:07:47.497 Capacity (in LBAs): 1310720 (5GiB) 00:07:47.497 Utilization (in LBAs): 1310720 (5GiB) 00:07:47.497 Thin Provisioning: Not Supported 00:07:47.497 Per-NS Atomic Units: No 00:07:47.497 Maximum Single Source Range Length: 128 00:07:47.497 Maximum Copy Length: 128 00:07:47.497 Maximum Source Range Count: 128 00:07:47.497 NGUID/EUI64 Never Reused: No 00:07:47.497 Namespace Write Protected: No 00:07:47.497 Number of LBA Formats: 8 00:07:47.497 Current LBA Format: LBA Format #04 00:07:47.497 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:47.497 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:47.497 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:47.497 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:47.497 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:07:47.497 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:47.497 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:47.497 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:47.497 00:07:47.497 NVM Specific Namespace Data 00:07:47.497 =========================== 00:07:47.497 Logical Block Storage Tag Mask: 0 00:07:47.497 Protection Information Capabilities: 00:07:47.497 16b Guard Protection Information Storage Tag Support: No 00:07:47.497 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:47.497 Storage Tag Check Read Support: No 00:07:47.497 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.497 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.497 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.497 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.497 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.497 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.497 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.497 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.497 09:36:15 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:47.497 09:36:15 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:07:47.759 ===================================================== 00:07:47.759 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:47.759 ===================================================== 00:07:47.759 Controller Capabilities/Features 00:07:47.759 ================================ 00:07:47.759 Vendor ID: 1b36 00:07:47.759 Subsystem Vendor ID: 1af4 00:07:47.759 Serial Number: 12342 00:07:47.759 Model Number: QEMU NVMe Ctrl 00:07:47.759 Firmware Version: 8.0.0 00:07:47.759 Recommended Arb Burst: 6 00:07:47.759 IEEE OUI Identifier: 00 54 52 00:07:47.759 Multi-path I/O 00:07:47.759 May have multiple subsystem ports: No 00:07:47.759 May have multiple controllers: No 00:07:47.759 Associated with SR-IOV VF: No 00:07:47.759 Max Data Transfer Size: 524288 00:07:47.759 Max Number of Namespaces: 256 00:07:47.759 Max Number of I/O Queues: 64 00:07:47.759 NVMe Specification Version (VS): 1.4 00:07:47.759 NVMe Specification Version (Identify): 1.4 00:07:47.759 Maximum Queue Entries: 2048 00:07:47.759 Contiguous Queues Required: Yes 00:07:47.759 Arbitration Mechanisms Supported 00:07:47.759 Weighted Round Robin: Not Supported 00:07:47.759 Vendor Specific: Not Supported 00:07:47.759 Reset Timeout: 7500 ms 00:07:47.759 Doorbell Stride: 4 bytes 00:07:47.759 NVM Subsystem Reset: Not Supported 00:07:47.759 Command Sets Supported 00:07:47.759 NVM Command Set: Supported 00:07:47.759 Boot Partition: Not Supported 00:07:47.759 Memory Page Size Minimum: 4096 bytes 00:07:47.759 Memory Page Size Maximum: 65536 bytes 00:07:47.759 Persistent Memory Region: Not Supported 00:07:47.759 Optional Asynchronous Events Supported 00:07:47.759 Namespace Attribute Notices: Supported 00:07:47.759 Firmware Activation Notices: Not Supported 00:07:47.759 ANA Change Notices: Not Supported 00:07:47.759 PLE Aggregate Log Change Notices: Not Supported 00:07:47.759 LBA Status Info Alert Notices: 
Not Supported 00:07:47.759 EGE Aggregate Log Change Notices: Not Supported 00:07:47.759 Normal NVM Subsystem Shutdown event: Not Supported 00:07:47.759 Zone Descriptor Change Notices: Not Supported 00:07:47.759 Discovery Log Change Notices: Not Supported 00:07:47.759 Controller Attributes 00:07:47.759 128-bit Host Identifier: Not Supported 00:07:47.759 Non-Operational Permissive Mode: Not Supported 00:07:47.759 NVM Sets: Not Supported 00:07:47.759 Read Recovery Levels: Not Supported 00:07:47.759 Endurance Groups: Not Supported 00:07:47.759 Predictable Latency Mode: Not Supported 00:07:47.759 Traffic Based Keep ALive: Not Supported 00:07:47.759 Namespace Granularity: Not Supported 00:07:47.759 SQ Associations: Not Supported 00:07:47.759 UUID List: Not Supported 00:07:47.759 Multi-Domain Subsystem: Not Supported 00:07:47.759 Fixed Capacity Management: Not Supported 00:07:47.759 Variable Capacity Management: Not Supported 00:07:47.759 Delete Endurance Group: Not Supported 00:07:47.759 Delete NVM Set: Not Supported 00:07:47.759 Extended LBA Formats Supported: Supported 00:07:47.759 Flexible Data Placement Supported: Not Supported 00:07:47.759 00:07:47.759 Controller Memory Buffer Support 00:07:47.759 ================================ 00:07:47.759 Supported: No 00:07:47.759 00:07:47.759 Persistent Memory Region Support 00:07:47.759 ================================ 00:07:47.759 Supported: No 00:07:47.759 00:07:47.759 Admin Command Set Attributes 00:07:47.759 ============================ 00:07:47.759 Security Send/Receive: Not Supported 00:07:47.759 Format NVM: Supported 00:07:47.759 Firmware Activate/Download: Not Supported 00:07:47.759 Namespace Management: Supported 00:07:47.759 Device Self-Test: Not Supported 00:07:47.759 Directives: Supported 00:07:47.759 NVMe-MI: Not Supported 00:07:47.759 Virtualization Management: Not Supported 00:07:47.759 Doorbell Buffer Config: Supported 00:07:47.759 Get LBA Status Capability: Not Supported 00:07:47.759 Command & Feature Lockdown Capability: Not Supported 00:07:47.759 Abort Command Limit: 4 00:07:47.759 Async Event Request Limit: 4 00:07:47.759 Number of Firmware Slots: N/A 00:07:47.759 Firmware Slot 1 Read-Only: N/A 00:07:47.759 Firmware Activation Without Reset: N/A 00:07:47.759 Multiple Update Detection Support: N/A 00:07:47.759 Firmware Update Granularity: No Information Provided 00:07:47.759 Per-Namespace SMART Log: Yes 00:07:47.759 Asymmetric Namespace Access Log Page: Not Supported 00:07:47.759 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:47.759 Command Effects Log Page: Supported 00:07:47.759 Get Log Page Extended Data: Supported 00:07:47.759 Telemetry Log Pages: Not Supported 00:07:47.759 Persistent Event Log Pages: Not Supported 00:07:47.759 Supported Log Pages Log Page: May Support 00:07:47.759 Commands Supported & Effects Log Page: Not Supported 00:07:47.759 Feature Identifiers & Effects Log Page:May Support 00:07:47.759 NVMe-MI Commands & Effects Log Page: May Support 00:07:47.759 Data Area 4 for Telemetry Log: Not Supported 00:07:47.759 Error Log Page Entries Supported: 1 00:07:47.759 Keep Alive: Not Supported 00:07:47.760 00:07:47.760 NVM Command Set Attributes 00:07:47.760 ========================== 00:07:47.760 Submission Queue Entry Size 00:07:47.760 Max: 64 00:07:47.760 Min: 64 00:07:47.760 Completion Queue Entry Size 00:07:47.760 Max: 16 00:07:47.760 Min: 16 00:07:47.760 Number of Namespaces: 256 00:07:47.760 Compare Command: Supported 00:07:47.760 Write Uncorrectable Command: Not Supported 00:07:47.760 Dataset Management Command: 
Supported 00:07:47.760 Write Zeroes Command: Supported 00:07:47.760 Set Features Save Field: Supported 00:07:47.760 Reservations: Not Supported 00:07:47.760 Timestamp: Supported 00:07:47.760 Copy: Supported 00:07:47.760 Volatile Write Cache: Present 00:07:47.760 Atomic Write Unit (Normal): 1 00:07:47.760 Atomic Write Unit (PFail): 1 00:07:47.760 Atomic Compare & Write Unit: 1 00:07:47.760 Fused Compare & Write: Not Supported 00:07:47.760 Scatter-Gather List 00:07:47.760 SGL Command Set: Supported 00:07:47.760 SGL Keyed: Not Supported 00:07:47.760 SGL Bit Bucket Descriptor: Not Supported 00:07:47.760 SGL Metadata Pointer: Not Supported 00:07:47.760 Oversized SGL: Not Supported 00:07:47.760 SGL Metadata Address: Not Supported 00:07:47.760 SGL Offset: Not Supported 00:07:47.760 Transport SGL Data Block: Not Supported 00:07:47.760 Replay Protected Memory Block: Not Supported 00:07:47.760 00:07:47.760 Firmware Slot Information 00:07:47.760 ========================= 00:07:47.760 Active slot: 1 00:07:47.760 Slot 1 Firmware Revision: 1.0 00:07:47.760 00:07:47.760 00:07:47.760 Commands Supported and Effects 00:07:47.760 ============================== 00:07:47.760 Admin Commands 00:07:47.760 -------------- 00:07:47.760 Delete I/O Submission Queue (00h): Supported 00:07:47.760 Create I/O Submission Queue (01h): Supported 00:07:47.760 Get Log Page (02h): Supported 00:07:47.760 Delete I/O Completion Queue (04h): Supported 00:07:47.760 Create I/O Completion Queue (05h): Supported 00:07:47.760 Identify (06h): Supported 00:07:47.760 Abort (08h): Supported 00:07:47.760 Set Features (09h): Supported 00:07:47.760 Get Features (0Ah): Supported 00:07:47.760 Asynchronous Event Request (0Ch): Supported 00:07:47.760 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:47.760 Directive Send (19h): Supported 00:07:47.760 Directive Receive (1Ah): Supported 00:07:47.760 Virtualization Management (1Ch): Supported 00:07:47.760 Doorbell Buffer Config (7Ch): Supported 00:07:47.760 Format NVM (80h): Supported LBA-Change 00:07:47.760 I/O Commands 00:07:47.760 ------------ 00:07:47.760 Flush (00h): Supported LBA-Change 00:07:47.760 Write (01h): Supported LBA-Change 00:07:47.760 Read (02h): Supported 00:07:47.760 Compare (05h): Supported 00:07:47.760 Write Zeroes (08h): Supported LBA-Change 00:07:47.760 Dataset Management (09h): Supported LBA-Change 00:07:47.760 Unknown (0Ch): Supported 00:07:47.760 Unknown (12h): Supported 00:07:47.760 Copy (19h): Supported LBA-Change 00:07:47.760 Unknown (1Dh): Supported LBA-Change 00:07:47.760 00:07:47.760 Error Log 00:07:47.760 ========= 00:07:47.760 00:07:47.760 Arbitration 00:07:47.760 =========== 00:07:47.760 Arbitration Burst: no limit 00:07:47.760 00:07:47.760 Power Management 00:07:47.760 ================ 00:07:47.760 Number of Power States: 1 00:07:47.760 Current Power State: Power State #0 00:07:47.760 Power State #0: 00:07:47.760 Max Power: 25.00 W 00:07:47.760 Non-Operational State: Operational 00:07:47.760 Entry Latency: 16 microseconds 00:07:47.760 Exit Latency: 4 microseconds 00:07:47.760 Relative Read Throughput: 0 00:07:47.760 Relative Read Latency: 0 00:07:47.760 Relative Write Throughput: 0 00:07:47.760 Relative Write Latency: 0 00:07:47.760 Idle Power: Not Reported 00:07:47.760 Active Power: Not Reported 00:07:47.760 Non-Operational Permissive Mode: Not Supported 00:07:47.760 00:07:47.760 Health Information 00:07:47.760 ================== 00:07:47.760 Critical Warnings: 00:07:47.760 Available Spare Space: OK 00:07:47.760 Temperature: OK 00:07:47.760 Device 
Reliability: OK 00:07:47.760 Read Only: No 00:07:47.760 Volatile Memory Backup: OK 00:07:47.760 Current Temperature: 323 Kelvin (50 Celsius) 00:07:47.760 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:47.760 Available Spare: 0% 00:07:47.760 Available Spare Threshold: 0% 00:07:47.760 Life Percentage Used: 0% 00:07:47.760 Data Units Read: 2295 00:07:47.760 Data Units Written: 2082 00:07:47.760 Host Read Commands: 118429 00:07:47.760 Host Write Commands: 116700 00:07:47.760 Controller Busy Time: 0 minutes 00:07:47.760 Power Cycles: 0 00:07:47.760 Power On Hours: 0 hours 00:07:47.760 Unsafe Shutdowns: 0 00:07:47.760 Unrecoverable Media Errors: 0 00:07:47.760 Lifetime Error Log Entries: 0 00:07:47.760 Warning Temperature Time: 0 minutes 00:07:47.760 Critical Temperature Time: 0 minutes 00:07:47.760 00:07:47.760 Number of Queues 00:07:47.760 ================ 00:07:47.760 Number of I/O Submission Queues: 64 00:07:47.760 Number of I/O Completion Queues: 64 00:07:47.760 00:07:47.760 ZNS Specific Controller Data 00:07:47.760 ============================ 00:07:47.760 Zone Append Size Limit: 0 00:07:47.760 00:07:47.760 00:07:47.760 Active Namespaces 00:07:47.760 ================= 00:07:47.760 Namespace ID:1 00:07:47.760 Error Recovery Timeout: Unlimited 00:07:47.760 Command Set Identifier: NVM (00h) 00:07:47.760 Deallocate: Supported 00:07:47.760 Deallocated/Unwritten Error: Supported 00:07:47.760 Deallocated Read Value: All 0x00 00:07:47.760 Deallocate in Write Zeroes: Not Supported 00:07:47.760 Deallocated Guard Field: 0xFFFF 00:07:47.760 Flush: Supported 00:07:47.760 Reservation: Not Supported 00:07:47.760 Namespace Sharing Capabilities: Private 00:07:47.760 Size (in LBAs): 1048576 (4GiB) 00:07:47.760 Capacity (in LBAs): 1048576 (4GiB) 00:07:47.760 Utilization (in LBAs): 1048576 (4GiB) 00:07:47.760 Thin Provisioning: Not Supported 00:07:47.760 Per-NS Atomic Units: No 00:07:47.760 Maximum Single Source Range Length: 128 00:07:47.760 Maximum Copy Length: 128 00:07:47.760 Maximum Source Range Count: 128 00:07:47.760 NGUID/EUI64 Never Reused: No 00:07:47.760 Namespace Write Protected: No 00:07:47.760 Number of LBA Formats: 8 00:07:47.760 Current LBA Format: LBA Format #04 00:07:47.760 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:47.760 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:47.760 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:47.760 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:47.760 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:47.760 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:47.760 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:47.760 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:47.760 00:07:47.760 NVM Specific Namespace Data 00:07:47.760 =========================== 00:07:47.760 Logical Block Storage Tag Mask: 0 00:07:47.760 Protection Information Capabilities: 00:07:47.760 16b Guard Protection Information Storage Tag Support: No 00:07:47.760 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:47.760 Storage Tag Check Read Support: No 00:07:47.760 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.760 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.760 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.760 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.760 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.760 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.760 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.760 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.760 Namespace ID:2 00:07:47.760 Error Recovery Timeout: Unlimited 00:07:47.760 Command Set Identifier: NVM (00h) 00:07:47.760 Deallocate: Supported 00:07:47.760 Deallocated/Unwritten Error: Supported 00:07:47.760 Deallocated Read Value: All 0x00 00:07:47.760 Deallocate in Write Zeroes: Not Supported 00:07:47.760 Deallocated Guard Field: 0xFFFF 00:07:47.760 Flush: Supported 00:07:47.760 Reservation: Not Supported 00:07:47.760 Namespace Sharing Capabilities: Private 00:07:47.760 Size (in LBAs): 1048576 (4GiB) 00:07:47.760 Capacity (in LBAs): 1048576 (4GiB) 00:07:47.760 Utilization (in LBAs): 1048576 (4GiB) 00:07:47.760 Thin Provisioning: Not Supported 00:07:47.760 Per-NS Atomic Units: No 00:07:47.760 Maximum Single Source Range Length: 128 00:07:47.761 Maximum Copy Length: 128 00:07:47.761 Maximum Source Range Count: 128 00:07:47.761 NGUID/EUI64 Never Reused: No 00:07:47.761 Namespace Write Protected: No 00:07:47.761 Number of LBA Formats: 8 00:07:47.761 Current LBA Format: LBA Format #04 00:07:47.761 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:47.761 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:47.761 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:47.761 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:47.761 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:47.761 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:47.761 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:47.761 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:47.761 00:07:47.761 NVM Specific Namespace Data 00:07:47.761 =========================== 00:07:47.761 Logical Block Storage Tag Mask: 0 00:07:47.761 Protection Information Capabilities: 00:07:47.761 16b Guard Protection Information Storage Tag Support: No 00:07:47.761 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:47.761 Storage Tag Check Read Support: No 00:07:47.761 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.761 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.761 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.761 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.761 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.761 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.761 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.761 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.761 Namespace ID:3 00:07:47.761 Error Recovery Timeout: Unlimited 00:07:47.761 Command Set Identifier: NVM (00h) 00:07:47.761 Deallocate: Supported 00:07:47.761 Deallocated/Unwritten Error: Supported 00:07:47.761 Deallocated Read Value: All 0x00 00:07:47.761 Deallocate in Write Zeroes: Not Supported 00:07:47.761 Deallocated Guard Field: 0xFFFF 00:07:47.761 Flush: Supported 00:07:47.761 Reservation: Not Supported 00:07:47.761 
Namespace Sharing Capabilities: Private 00:07:47.761 Size (in LBAs): 1048576 (4GiB) 00:07:47.761 Capacity (in LBAs): 1048576 (4GiB) 00:07:47.761 Utilization (in LBAs): 1048576 (4GiB) 00:07:47.761 Thin Provisioning: Not Supported 00:07:47.761 Per-NS Atomic Units: No 00:07:47.761 Maximum Single Source Range Length: 128 00:07:47.761 Maximum Copy Length: 128 00:07:47.761 Maximum Source Range Count: 128 00:07:47.761 NGUID/EUI64 Never Reused: No 00:07:47.761 Namespace Write Protected: No 00:07:47.761 Number of LBA Formats: 8 00:07:47.761 Current LBA Format: LBA Format #04 00:07:47.761 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:47.761 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:47.761 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:47.761 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:47.761 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:47.761 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:47.761 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:47.761 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:47.761 00:07:47.761 NVM Specific Namespace Data 00:07:47.761 =========================== 00:07:47.761 Logical Block Storage Tag Mask: 0 00:07:47.761 Protection Information Capabilities: 00:07:47.761 16b Guard Protection Information Storage Tag Support: No 00:07:47.761 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:47.761 Storage Tag Check Read Support: No 00:07:47.761 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.761 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.761 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.761 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.761 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.761 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.761 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.761 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:47.761 09:36:15 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:47.761 09:36:15 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:07:48.023 ===================================================== 00:07:48.023 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:48.023 ===================================================== 00:07:48.023 Controller Capabilities/Features 00:07:48.023 ================================ 00:07:48.023 Vendor ID: 1b36 00:07:48.023 Subsystem Vendor ID: 1af4 00:07:48.023 Serial Number: 12343 00:07:48.023 Model Number: QEMU NVMe Ctrl 00:07:48.023 Firmware Version: 8.0.0 00:07:48.023 Recommended Arb Burst: 6 00:07:48.023 IEEE OUI Identifier: 00 54 52 00:07:48.023 Multi-path I/O 00:07:48.023 May have multiple subsystem ports: No 00:07:48.023 May have multiple controllers: Yes 00:07:48.023 Associated with SR-IOV VF: No 00:07:48.023 Max Data Transfer Size: 524288 00:07:48.023 Max Number of Namespaces: 256 00:07:48.023 Max Number of I/O Queues: 64 00:07:48.023 NVMe Specification Version (VS): 1.4 00:07:48.023 NVMe Specification Version (Identify): 1.4 00:07:48.023 Maximum Queue Entries: 2048 
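The GiB annotations in these namespace dumps are consistent with LBA count times the data size of the current LBA format (4096 bytes for formats #04 and #07 here), floored to whole GiB. A quick shell check against the figures reported in this run:

  # GiB value for a namespace: LBA count x block size, integer-floored to GiB.
  lba_to_gib() {
      local lbas=$1 block=$2
      echo $(( lbas * block / 1024**3 ))
  }
  lba_to_gib 1310720 4096   # 5  -> "1310720 (5GiB)" on 12341
  lba_to_gib 1048576 4096   # 4  -> "1048576 (4GiB)" on 12342/12343
  lba_to_gib 1548666 4096   # 5  -> "1548666 (5GiB)" on 12340 (~5.9 GiB, floored)
  # The temperature lines use an equally direct conversion:
  echo $(( 323 - 273 ))     # 50 -> "323 Kelvin (50 Celsius)"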
00:07:48.023 Contiguous Queues Required: Yes 00:07:48.023 Arbitration Mechanisms Supported 00:07:48.023 Weighted Round Robin: Not Supported 00:07:48.023 Vendor Specific: Not Supported 00:07:48.023 Reset Timeout: 7500 ms 00:07:48.023 Doorbell Stride: 4 bytes 00:07:48.023 NVM Subsystem Reset: Not Supported 00:07:48.023 Command Sets Supported 00:07:48.023 NVM Command Set: Supported 00:07:48.023 Boot Partition: Not Supported 00:07:48.023 Memory Page Size Minimum: 4096 bytes 00:07:48.023 Memory Page Size Maximum: 65536 bytes 00:07:48.023 Persistent Memory Region: Not Supported 00:07:48.023 Optional Asynchronous Events Supported 00:07:48.023 Namespace Attribute Notices: Supported 00:07:48.023 Firmware Activation Notices: Not Supported 00:07:48.023 ANA Change Notices: Not Supported 00:07:48.023 PLE Aggregate Log Change Notices: Not Supported 00:07:48.023 LBA Status Info Alert Notices: Not Supported 00:07:48.023 EGE Aggregate Log Change Notices: Not Supported 00:07:48.023 Normal NVM Subsystem Shutdown event: Not Supported 00:07:48.023 Zone Descriptor Change Notices: Not Supported 00:07:48.023 Discovery Log Change Notices: Not Supported 00:07:48.023 Controller Attributes 00:07:48.023 128-bit Host Identifier: Not Supported 00:07:48.023 Non-Operational Permissive Mode: Not Supported 00:07:48.023 NVM Sets: Not Supported 00:07:48.023 Read Recovery Levels: Not Supported 00:07:48.023 Endurance Groups: Supported 00:07:48.023 Predictable Latency Mode: Not Supported 00:07:48.023 Traffic Based Keep Alive: Not Supported 00:07:48.023 Namespace Granularity: Not Supported 00:07:48.023 SQ Associations: Not Supported 00:07:48.023 UUID List: Not Supported 00:07:48.023 Multi-Domain Subsystem: Not Supported 00:07:48.023 Fixed Capacity Management: Not Supported 00:07:48.023 Variable Capacity Management: Not Supported 00:07:48.023 Delete Endurance Group: Not Supported 00:07:48.023 Delete NVM Set: Not Supported 00:07:48.023 Extended LBA Formats Supported: Supported 00:07:48.023 Flexible Data Placement Supported: Supported 00:07:48.023 00:07:48.023 Controller Memory Buffer Support 00:07:48.023 ================================ 00:07:48.023 Supported: No 00:07:48.023 00:07:48.023 Persistent Memory Region Support 00:07:48.023 ================================ 00:07:48.023 Supported: No 00:07:48.023 00:07:48.023 Admin Command Set Attributes 00:07:48.023 ============================ 00:07:48.023 Security Send/Receive: Not Supported 00:07:48.023 Format NVM: Supported 00:07:48.023 Firmware Activate/Download: Not Supported 00:07:48.023 Namespace Management: Supported 00:07:48.023 Device Self-Test: Not Supported 00:07:48.023 Directives: Supported 00:07:48.023 NVMe-MI: Not Supported 00:07:48.023 Virtualization Management: Not Supported 00:07:48.023 Doorbell Buffer Config: Supported 00:07:48.023 Get LBA Status Capability: Not Supported 00:07:48.023 Command & Feature Lockdown Capability: Not Supported 00:07:48.023 Abort Command Limit: 4 00:07:48.023 Async Event Request Limit: 4 00:07:48.023 Number of Firmware Slots: N/A 00:07:48.023 Firmware Slot 1 Read-Only: N/A 00:07:48.023 Firmware Activation Without Reset: N/A 00:07:48.023 Multiple Update Detection Support: N/A 00:07:48.023 Firmware Update Granularity: No Information Provided 00:07:48.023 Per-Namespace SMART Log: Yes 00:07:48.023 Asymmetric Namespace Access Log Page: Not Supported 00:07:48.023 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:48.023 Command Effects Log Page: Supported 00:07:48.023 Get Log Page Extended Data: Supported 00:07:48.023 Telemetry Log Pages: Not
Supported 00:07:48.024 Persistent Event Log Pages: Not Supported 00:07:48.024 Supported Log Pages Log Page: May Support 00:07:48.024 Commands Supported & Effects Log Page: Not Supported 00:07:48.024 Feature Identifiers & Effects Log Page: May Support 00:07:48.024 NVMe-MI Commands & Effects Log Page: May Support 00:07:48.024 Data Area 4 for Telemetry Log: Not Supported 00:07:48.024 Error Log Page Entries Supported: 1 00:07:48.024 Keep Alive: Not Supported 00:07:48.024 00:07:48.024 NVM Command Set Attributes 00:07:48.024 ========================== 00:07:48.024 Submission Queue Entry Size 00:07:48.024 Max: 64 00:07:48.024 Min: 64 00:07:48.024 Completion Queue Entry Size 00:07:48.024 Max: 16 00:07:48.024 Min: 16 00:07:48.024 Number of Namespaces: 256 00:07:48.024 Compare Command: Supported 00:07:48.024 Write Uncorrectable Command: Not Supported 00:07:48.024 Dataset Management Command: Supported 00:07:48.024 Write Zeroes Command: Supported 00:07:48.024 Set Features Save Field: Supported 00:07:48.024 Reservations: Not Supported 00:07:48.024 Timestamp: Supported 00:07:48.024 Copy: Supported 00:07:48.024 Volatile Write Cache: Present 00:07:48.024 Atomic Write Unit (Normal): 1 00:07:48.024 Atomic Write Unit (PFail): 1 00:07:48.024 Atomic Compare & Write Unit: 1 00:07:48.024 Fused Compare & Write: Not Supported 00:07:48.024 Scatter-Gather List 00:07:48.024 SGL Command Set: Supported 00:07:48.024 SGL Keyed: Not Supported 00:07:48.024 SGL Bit Bucket Descriptor: Not Supported 00:07:48.024 SGL Metadata Pointer: Not Supported 00:07:48.024 Oversized SGL: Not Supported 00:07:48.024 SGL Metadata Address: Not Supported 00:07:48.024 SGL Offset: Not Supported 00:07:48.024 Transport SGL Data Block: Not Supported 00:07:48.024 Replay Protected Memory Block: Not Supported 00:07:48.024 00:07:48.024 Firmware Slot Information 00:07:48.024 ========================= 00:07:48.024 Active slot: 1 00:07:48.024 Slot 1 Firmware Revision: 1.0 00:07:48.024 00:07:48.024 00:07:48.024 Commands Supported and Effects 00:07:48.024 ============================== 00:07:48.024 Admin Commands 00:07:48.024 -------------- 00:07:48.024 Delete I/O Submission Queue (00h): Supported 00:07:48.024 Create I/O Submission Queue (01h): Supported 00:07:48.024 Get Log Page (02h): Supported 00:07:48.024 Delete I/O Completion Queue (04h): Supported 00:07:48.024 Create I/O Completion Queue (05h): Supported 00:07:48.024 Identify (06h): Supported 00:07:48.024 Abort (08h): Supported 00:07:48.024 Set Features (09h): Supported 00:07:48.024 Get Features (0Ah): Supported 00:07:48.024 Asynchronous Event Request (0Ch): Supported 00:07:48.024 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:48.024 Directive Send (19h): Supported 00:07:48.024 Directive Receive (1Ah): Supported 00:07:48.024 Virtualization Management (1Ch): Supported 00:07:48.024 Doorbell Buffer Config (7Ch): Supported 00:07:48.024 Format NVM (80h): Supported LBA-Change 00:07:48.024 I/O Commands 00:07:48.024 ------------ 00:07:48.024 Flush (00h): Supported LBA-Change 00:07:48.024 Write (01h): Supported LBA-Change 00:07:48.024 Read (02h): Supported 00:07:48.024 Compare (05h): Supported 00:07:48.024 Write Zeroes (08h): Supported LBA-Change 00:07:48.024 Dataset Management (09h): Supported LBA-Change 00:07:48.024 Unknown (0Ch): Supported 00:07:48.024 Unknown (12h): Supported 00:07:48.024 Copy (19h): Supported LBA-Change 00:07:48.024 Unknown (1Dh): Supported LBA-Change 00:07:48.024 00:07:48.024 Error Log 00:07:48.024 ========= 00:07:48.024 00:07:48.024 Arbitration 00:07:48.024 ===========
00:07:48.024 Arbitration Burst: no limit 00:07:48.024 00:07:48.024 Power Management 00:07:48.024 ================ 00:07:48.024 Number of Power States: 1 00:07:48.024 Current Power State: Power State #0 00:07:48.024 Power State #0: 00:07:48.024 Max Power: 25.00 W 00:07:48.024 Non-Operational State: Operational 00:07:48.024 Entry Latency: 16 microseconds 00:07:48.024 Exit Latency: 4 microseconds 00:07:48.024 Relative Read Throughput: 0 00:07:48.024 Relative Read Latency: 0 00:07:48.024 Relative Write Throughput: 0 00:07:48.024 Relative Write Latency: 0 00:07:48.024 Idle Power: Not Reported 00:07:48.024 Active Power: Not Reported 00:07:48.024 Non-Operational Permissive Mode: Not Supported 00:07:48.024 00:07:48.024 Health Information 00:07:48.024 ================== 00:07:48.024 Critical Warnings: 00:07:48.024 Available Spare Space: OK 00:07:48.024 Temperature: OK 00:07:48.024 Device Reliability: OK 00:07:48.024 Read Only: No 00:07:48.024 Volatile Memory Backup: OK 00:07:48.024 Current Temperature: 323 Kelvin (50 Celsius) 00:07:48.024 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:48.024 Available Spare: 0% 00:07:48.024 Available Spare Threshold: 0% 00:07:48.024 Life Percentage Used: 0% 00:07:48.024 Data Units Read: 1078 00:07:48.024 Data Units Written: 1007 00:07:48.024 Host Read Commands: 42039 00:07:48.024 Host Write Commands: 41462 00:07:48.024 Controller Busy Time: 0 minutes 00:07:48.024 Power Cycles: 0 00:07:48.024 Power On Hours: 0 hours 00:07:48.024 Unsafe Shutdowns: 0 00:07:48.024 Unrecoverable Media Errors: 0 00:07:48.024 Lifetime Error Log Entries: 0 00:07:48.024 Warning Temperature Time: 0 minutes 00:07:48.024 Critical Temperature Time: 0 minutes 00:07:48.024 00:07:48.024 Number of Queues 00:07:48.024 ================ 00:07:48.024 Number of I/O Submission Queues: 64 00:07:48.024 Number of I/O Completion Queues: 64 00:07:48.024 00:07:48.024 ZNS Specific Controller Data 00:07:48.024 ============================ 00:07:48.024 Zone Append Size Limit: 0 00:07:48.024 00:07:48.024 00:07:48.024 Active Namespaces 00:07:48.024 ================= 00:07:48.024 Namespace ID:1 00:07:48.024 Error Recovery Timeout: Unlimited 00:07:48.024 Command Set Identifier: NVM (00h) 00:07:48.024 Deallocate: Supported 00:07:48.024 Deallocated/Unwritten Error: Supported 00:07:48.024 Deallocated Read Value: All 0x00 00:07:48.024 Deallocate in Write Zeroes: Not Supported 00:07:48.024 Deallocated Guard Field: 0xFFFF 00:07:48.024 Flush: Supported 00:07:48.024 Reservation: Not Supported 00:07:48.024 Namespace Sharing Capabilities: Multiple Controllers 00:07:48.024 Size (in LBAs): 262144 (1GiB) 00:07:48.024 Capacity (in LBAs): 262144 (1GiB) 00:07:48.024 Utilization (in LBAs): 262144 (1GiB) 00:07:48.024 Thin Provisioning: Not Supported 00:07:48.024 Per-NS Atomic Units: No 00:07:48.024 Maximum Single Source Range Length: 128 00:07:48.024 Maximum Copy Length: 128 00:07:48.024 Maximum Source Range Count: 128 00:07:48.024 NGUID/EUI64 Never Reused: No 00:07:48.024 Namespace Write Protected: No 00:07:48.024 Endurance group ID: 1 00:07:48.024 Number of LBA Formats: 8 00:07:48.024 Current LBA Format: LBA Format #04 00:07:48.024 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:48.024 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:48.024 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:48.024 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:48.024 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:48.024 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:48.024 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:07:48.024 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:48.024 00:07:48.024 Get Feature FDP: 00:07:48.024 ================ 00:07:48.024 Enabled: Yes 00:07:48.024 FDP configuration index: 0 00:07:48.024 00:07:48.024 FDP configurations log page 00:07:48.024 =========================== 00:07:48.024 Number of FDP configurations: 1 00:07:48.024 Version: 0 00:07:48.024 Size: 112 00:07:48.024 FDP Configuration Descriptor: 0 00:07:48.024 Descriptor Size: 96 00:07:48.024 Reclaim Group Identifier format: 2 00:07:48.024 FDP Volatile Write Cache: Not Present 00:07:48.024 FDP Configuration: Valid 00:07:48.024 Vendor Specific Size: 0 00:07:48.024 Number of Reclaim Groups: 2 00:07:48.024 Number of Reclaim Unit Handles: 8 00:07:48.024 Max Placement Identifiers: 128 00:07:48.024 Number of Namespaces Supported: 256 00:07:48.024 Reclaim Unit Nominal Size: 6000000 bytes 00:07:48.024 Estimated Reclaim Unit Time Limit: Not Reported 00:07:48.024 RUH Desc #000: RUH Type: Initially Isolated 00:07:48.025 RUH Desc #001: RUH Type: Initially Isolated 00:07:48.025 RUH Desc #002: RUH Type: Initially Isolated 00:07:48.025 RUH Desc #003: RUH Type: Initially Isolated 00:07:48.025 RUH Desc #004: RUH Type: Initially Isolated 00:07:48.025 RUH Desc #005: RUH Type: Initially Isolated 00:07:48.025 RUH Desc #006: RUH Type: Initially Isolated 00:07:48.025 RUH Desc #007: RUH Type: Initially Isolated 00:07:48.025 00:07:48.025 FDP reclaim unit handle usage log page 00:07:48.025 ====================================== 00:07:48.025 Number of Reclaim Unit Handles: 8 00:07:48.025 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:48.025 RUH Usage Desc #001: RUH Attributes: Unused 00:07:48.025 RUH Usage Desc #002: RUH Attributes: Unused 00:07:48.025 RUH Usage Desc #003: RUH Attributes: Unused 00:07:48.025 RUH Usage Desc #004: RUH Attributes: Unused 00:07:48.025 RUH Usage Desc #005: RUH Attributes: Unused 00:07:48.025 RUH Usage Desc #006: RUH Attributes: Unused 00:07:48.025 RUH Usage Desc #007: RUH Attributes: Unused 00:07:48.025 00:07:48.025 FDP statistics log page 00:07:48.025 ======================= 00:07:48.025 Host bytes with metadata written: 606314496 00:07:48.025 Media bytes with metadata written: 606396416 00:07:48.025 Media bytes erased: 0 00:07:48.025 00:07:48.025 FDP events log page 00:07:48.025 =================== 00:07:48.025 Number of FDP events: 0 00:07:48.025 00:07:48.025 NVM Specific Namespace Data 00:07:48.025 =========================== 00:07:48.025 Logical Block Storage Tag Mask: 0 00:07:48.025 Protection Information Capabilities: 00:07:48.025 16b Guard Protection Information Storage Tag Support: No 00:07:48.025 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:48.025 Storage Tag Check Read Support: No 00:07:48.025 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.025 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.025 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.025 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.025 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.025 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.025 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.025 Extended LBA
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.025 00:07:48.025 real 0m1.234s 00:07:48.025 user 0m0.444s 00:07:48.025 sys 0m0.568s 00:07:48.025 09:36:15 nvme.nvme_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:48.025 09:36:15 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:07:48.025 ************************************ 00:07:48.025 END TEST nvme_identify 00:07:48.025 ************************************ 00:07:48.025 09:36:15 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:07:48.025 09:36:15 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:48.025 09:36:15 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:48.025 09:36:15 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:48.025 ************************************ 00:07:48.025 START TEST nvme_perf 00:07:48.025 ************************************ 00:07:48.025 09:36:15 nvme.nvme_perf -- common/autotest_common.sh@1127 -- # nvme_perf 00:07:48.025 09:36:15 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:07:49.410 Initializing NVMe Controllers 00:07:49.410 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:49.410 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:49.410 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:49.410 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:49.410 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:49.410 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:49.410 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:49.410 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:49.410 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:49.410 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:49.410 Initialization complete. Launching workers. 
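(For reference: the two SPDK binaries exercised by nvme.sh above can be replayed by hand against the same QEMU-emulated controllers. This is a minimal sketch, not the harness itself; it assumes the build tree at /home/vagrant/spdk_repo/spdk shown in the log and the usual spdk_nvme_perf option meanings — -q queue depth, -w workload type, -o I/O size in bytes, -t run time in seconds, -i shared-memory group ID — with -LL and -N copied verbatim from the harness invocation. The file name perf.log is a hypothetical capture of this console output.)

#!/usr/bin/env bash
# Sketch only: replay the identify/perf pair from this test by hand.
# Assumes SPDK is built and the PCIe BDFs below match the attached controllers.
set -euo pipefail
SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin

# Dump controller, namespace, and FDP data for one controller,
# as nvme.sh@15-16 does for each BDF above.
"${SPDK_BIN}/spdk_nvme_identify" -r 'trtype:PCIe traddr:0000:00:13.0' -i 0

# Queue depth 128, read workload, 12288-byte I/Os (three 4096-byte blocks
# of the current LBA format #04), 1-second run, latency tracking enabled.
"${SPDK_BIN}/spdk_nvme_perf" -q 128 -w read -o 12288 -t 1 -LL -i 0 -N

# Hypothetical post-processing: pull the 99th-percentile rows out of a
# saved copy of this console output.
grep -E '99\.00000% *:' perf.log

The per-namespace summary tables and cumulative latency histograms that follow are the latency-tracking output, with all percentiles reported in microseconds.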
00:07:49.410 ======================================================== 00:07:49.410 Latency(us) 00:07:49.410 Device Information : IOPS MiB/s Average min max 00:07:49.410 PCIE (0000:00:13.0) NSID 1 from core 0: 12066.60 141.41 10629.28 5669.73 39144.46 00:07:49.410 PCIE (0000:00:10.0) NSID 1 from core 0: 12066.60 141.41 10613.31 5572.33 37847.01 00:07:49.410 PCIE (0000:00:11.0) NSID 1 from core 0: 12066.60 141.41 10596.67 5661.18 36263.82 00:07:49.410 PCIE (0000:00:12.0) NSID 1 from core 0: 12066.60 141.41 10578.86 5672.27 35390.33 00:07:49.410 PCIE (0000:00:12.0) NSID 2 from core 0: 12066.60 141.41 10560.99 5677.24 33686.74 00:07:49.410 PCIE (0000:00:12.0) NSID 3 from core 0: 12130.11 142.15 10488.12 5687.79 26257.66 00:07:49.410 ======================================================== 00:07:49.410 Total : 72463.14 849.18 10577.79 5572.33 39144.46 00:07:49.410 00:07:49.410 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:49.410 ================================================================================= 00:07:49.410 1.00000% : 5797.415us 00:07:49.410 10.00000% : 6024.271us 00:07:49.410 25.00000% : 6351.951us 00:07:49.410 50.00000% : 8519.680us 00:07:49.410 75.00000% : 15022.868us 00:07:49.410 90.00000% : 16736.886us 00:07:49.410 95.00000% : 17341.834us 00:07:49.410 98.00000% : 18148.431us 00:07:49.410 99.00000% : 30247.385us 00:07:49.410 99.50000% : 37708.406us 00:07:49.410 99.90000% : 38918.302us 00:07:49.410 99.99000% : 39119.951us 00:07:49.410 99.99900% : 39321.600us 00:07:49.410 99.99990% : 39321.600us 00:07:49.410 99.99999% : 39321.600us 00:07:49.410 00:07:49.410 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:49.410 ================================================================================= 00:07:49.410 1.00000% : 5721.797us 00:07:49.410 10.00000% : 5999.065us 00:07:49.410 25.00000% : 6377.157us 00:07:49.410 50.00000% : 8469.268us 00:07:49.410 75.00000% : 15123.692us 00:07:49.410 90.00000% : 16736.886us 00:07:49.410 95.00000% : 17341.834us 00:07:49.410 98.00000% : 18047.606us 00:07:49.410 99.00000% : 28634.191us 00:07:49.410 99.50000% : 36296.862us 00:07:49.410 99.90000% : 37506.757us 00:07:49.410 99.99000% : 37910.055us 00:07:49.410 99.99900% : 37910.055us 00:07:49.410 99.99990% : 37910.055us 00:07:49.410 99.99999% : 37910.055us 00:07:49.410 00:07:49.410 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:49.410 ================================================================================= 00:07:49.410 1.00000% : 5772.209us 00:07:49.410 10.00000% : 6049.477us 00:07:49.410 25.00000% : 6351.951us 00:07:49.410 50.00000% : 8318.031us 00:07:49.410 75.00000% : 15123.692us 00:07:49.410 90.00000% : 16736.886us 00:07:49.410 95.00000% : 17442.658us 00:07:49.410 98.00000% : 18047.606us 00:07:49.410 99.00000% : 27020.997us 00:07:49.410 99.50000% : 34683.668us 00:07:49.410 99.90000% : 36095.212us 00:07:49.410 99.99000% : 36296.862us 00:07:49.410 99.99900% : 36296.862us 00:07:49.410 99.99990% : 36296.862us 00:07:49.410 99.99999% : 36296.862us 00:07:49.410 00:07:49.410 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:49.410 ================================================================================= 00:07:49.410 1.00000% : 5797.415us 00:07:49.410 10.00000% : 6024.271us 00:07:49.410 25.00000% : 6351.951us 00:07:49.410 50.00000% : 8469.268us 00:07:49.410 75.00000% : 15123.692us 00:07:49.410 90.00000% : 16636.062us 00:07:49.410 95.00000% : 17341.834us 00:07:49.410 98.00000% : 18249.255us 
00:07:49.410 99.00000% : 27020.997us 00:07:49.410 99.50000% : 33877.071us 00:07:49.410 99.90000% : 35086.966us 00:07:49.410 99.99000% : 35490.265us 00:07:49.410 99.99900% : 35490.265us 00:07:49.410 99.99990% : 35490.265us 00:07:49.410 99.99999% : 35490.265us 00:07:49.410 00:07:49.410 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:49.410 ================================================================================= 00:07:49.410 1.00000% : 5797.415us 00:07:49.410 10.00000% : 6024.271us 00:07:49.410 25.00000% : 6351.951us 00:07:49.410 50.00000% : 8620.505us 00:07:49.410 75.00000% : 15022.868us 00:07:49.410 90.00000% : 16636.062us 00:07:49.410 95.00000% : 17442.658us 00:07:49.410 98.00000% : 18350.080us 00:07:49.410 99.00000% : 25206.154us 00:07:49.410 99.50000% : 32263.877us 00:07:49.410 99.90000% : 33473.772us 00:07:49.410 99.99000% : 33675.422us 00:07:49.410 99.99900% : 33877.071us 00:07:49.410 99.99990% : 33877.071us 00:07:49.410 99.99999% : 33877.071us 00:07:49.410 00:07:49.410 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:49.410 ================================================================================= 00:07:49.410 1.00000% : 5797.415us 00:07:49.410 10.00000% : 6024.271us 00:07:49.410 25.00000% : 6351.951us 00:07:49.410 50.00000% : 8721.329us 00:07:49.410 75.00000% : 15022.868us 00:07:49.410 90.00000% : 16636.062us 00:07:49.410 95.00000% : 17341.834us 00:07:49.410 98.00000% : 18148.431us 00:07:49.410 99.00000% : 18753.378us 00:07:49.410 99.50000% : 24702.031us 00:07:49.410 99.90000% : 26012.751us 00:07:49.410 99.99000% : 26416.049us 00:07:49.410 99.99900% : 26416.049us 00:07:49.410 99.99990% : 26416.049us 00:07:49.410 99.99999% : 26416.049us 00:07:49.410 00:07:49.410 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:49.410 ============================================================================== 00:07:49.410 Range in us Cumulative IO count 00:07:49.410 5646.178 - 5671.385: 0.0247% ( 3) 00:07:49.410 5671.385 - 5696.591: 0.1398% ( 14) 00:07:49.410 5696.591 - 5721.797: 0.2878% ( 18) 00:07:49.410 5721.797 - 5747.003: 0.6003% ( 38) 00:07:49.410 5747.003 - 5772.209: 0.9868% ( 47) 00:07:49.410 5772.209 - 5797.415: 1.5132% ( 64) 00:07:49.410 5797.415 - 5822.622: 2.2286% ( 87) 00:07:49.410 5822.622 - 5847.828: 3.0510% ( 100) 00:07:49.410 5847.828 - 5873.034: 4.0049% ( 116) 00:07:49.410 5873.034 - 5898.240: 4.9836% ( 119) 00:07:49.410 5898.240 - 5923.446: 5.9211% ( 114) 00:07:49.410 5923.446 - 5948.652: 6.9819% ( 129) 00:07:49.410 5948.652 - 5973.858: 7.9934% ( 123) 00:07:49.410 5973.858 - 5999.065: 9.0461% ( 128) 00:07:49.410 5999.065 - 6024.271: 10.1234% ( 131) 00:07:49.410 6024.271 - 6049.477: 11.2418% ( 136) 00:07:49.410 6049.477 - 6074.683: 12.3849% ( 139) 00:07:49.410 6074.683 - 6099.889: 13.5115% ( 137) 00:07:49.410 6099.889 - 6125.095: 14.6546% ( 139) 00:07:49.410 6125.095 - 6150.302: 15.8470% ( 145) 00:07:49.410 6150.302 - 6175.508: 17.0312% ( 144) 00:07:49.410 6175.508 - 6200.714: 18.1497% ( 136) 00:07:49.410 6200.714 - 6225.920: 19.3092% ( 141) 00:07:49.410 6225.920 - 6251.126: 20.4523% ( 139) 00:07:49.410 6251.126 - 6276.332: 21.6365% ( 144) 00:07:49.410 6276.332 - 6301.538: 22.7878% ( 140) 00:07:49.410 6301.538 - 6326.745: 23.9720% ( 144) 00:07:49.411 6326.745 - 6351.951: 25.1809% ( 147) 00:07:49.411 6351.951 - 6377.157: 26.3816% ( 146) 00:07:49.411 6377.157 - 6402.363: 27.6234% ( 151) 00:07:49.411 6402.363 - 6427.569: 28.7993% ( 143) 00:07:49.411 6427.569 - 6452.775: 29.9671% ( 142) 00:07:49.411 
6452.775 - 6503.188: 32.4836% ( 306) 00:07:49.411 6503.188 - 6553.600: 34.8520% ( 288) 00:07:49.411 6553.600 - 6604.012: 36.7681% ( 233) 00:07:49.411 6604.012 - 6654.425: 38.2812% ( 184) 00:07:49.411 6654.425 - 6704.837: 39.3503% ( 130) 00:07:49.411 6704.837 - 6755.249: 40.0822% ( 89) 00:07:49.411 6755.249 - 6805.662: 40.6908% ( 74) 00:07:49.411 6805.662 - 6856.074: 41.2418% ( 67) 00:07:49.411 6856.074 - 6906.486: 41.6530% ( 50) 00:07:49.411 6906.486 - 6956.898: 42.0148% ( 44) 00:07:49.411 6956.898 - 7007.311: 42.3355% ( 39) 00:07:49.411 7007.311 - 7057.723: 42.6727% ( 41) 00:07:49.411 7057.723 - 7108.135: 43.0016% ( 40) 00:07:49.411 7108.135 - 7158.548: 43.3470% ( 42) 00:07:49.411 7158.548 - 7208.960: 43.6349% ( 35) 00:07:49.411 7208.960 - 7259.372: 43.8898% ( 31) 00:07:49.411 7259.372 - 7309.785: 44.1201% ( 28) 00:07:49.411 7309.785 - 7360.197: 44.3257% ( 25) 00:07:49.411 7360.197 - 7410.609: 44.5395% ( 26) 00:07:49.411 7410.609 - 7461.022: 44.7780% ( 29) 00:07:49.411 7461.022 - 7511.434: 45.0822% ( 37) 00:07:49.411 7511.434 - 7561.846: 45.3536% ( 33) 00:07:49.411 7561.846 - 7612.258: 45.6250% ( 33) 00:07:49.411 7612.258 - 7662.671: 45.8470% ( 27) 00:07:49.411 7662.671 - 7713.083: 46.0773% ( 28) 00:07:49.411 7713.083 - 7763.495: 46.3569% ( 34) 00:07:49.411 7763.495 - 7813.908: 46.6201% ( 32) 00:07:49.411 7813.908 - 7864.320: 46.8668% ( 30) 00:07:49.411 7864.320 - 7914.732: 47.0395% ( 21) 00:07:49.411 7914.732 - 7965.145: 47.2533% ( 26) 00:07:49.411 7965.145 - 8015.557: 47.4753% ( 27) 00:07:49.411 8015.557 - 8065.969: 47.6809% ( 25) 00:07:49.411 8065.969 - 8116.382: 47.9770% ( 36) 00:07:49.411 8116.382 - 8166.794: 48.2484% ( 33) 00:07:49.411 8166.794 - 8217.206: 48.5115% ( 32) 00:07:49.411 8217.206 - 8267.618: 48.8076% ( 36) 00:07:49.411 8267.618 - 8318.031: 49.0543% ( 30) 00:07:49.411 8318.031 - 8368.443: 49.3339% ( 34) 00:07:49.411 8368.443 - 8418.855: 49.5970% ( 32) 00:07:49.411 8418.855 - 8469.268: 49.8520% ( 31) 00:07:49.411 8469.268 - 8519.680: 50.1234% ( 33) 00:07:49.411 8519.680 - 8570.092: 50.4112% ( 35) 00:07:49.411 8570.092 - 8620.505: 50.6743% ( 32) 00:07:49.411 8620.505 - 8670.917: 50.9293% ( 31) 00:07:49.411 8670.917 - 8721.329: 51.1595% ( 28) 00:07:49.411 8721.329 - 8771.742: 51.4062% ( 30) 00:07:49.411 8771.742 - 8822.154: 51.6447% ( 29) 00:07:49.411 8822.154 - 8872.566: 51.8586% ( 26) 00:07:49.411 8872.566 - 8922.978: 52.0806% ( 27) 00:07:49.411 8922.978 - 8973.391: 52.2862% ( 25) 00:07:49.411 8973.391 - 9023.803: 52.5082% ( 27) 00:07:49.411 9023.803 - 9074.215: 52.7220% ( 26) 00:07:49.411 9074.215 - 9124.628: 52.9441% ( 27) 00:07:49.411 9124.628 - 9175.040: 53.1414% ( 24) 00:07:49.411 9175.040 - 9225.452: 53.3470% ( 25) 00:07:49.411 9225.452 - 9275.865: 53.5444% ( 24) 00:07:49.411 9275.865 - 9326.277: 53.7500% ( 25) 00:07:49.411 9326.277 - 9376.689: 54.0049% ( 31) 00:07:49.411 9376.689 - 9427.102: 54.2516% ( 30) 00:07:49.411 9427.102 - 9477.514: 54.5230% ( 33) 00:07:49.411 9477.514 - 9527.926: 54.7368% ( 26) 00:07:49.411 9527.926 - 9578.338: 54.9424% ( 25) 00:07:49.411 9578.338 - 9628.751: 55.1316% ( 23) 00:07:49.411 9628.751 - 9679.163: 55.2714% ( 17) 00:07:49.411 9679.163 - 9729.575: 55.4359% ( 20) 00:07:49.411 9729.575 - 9779.988: 55.5921% ( 19) 00:07:49.411 9779.988 - 9830.400: 55.6990% ( 13) 00:07:49.411 9830.400 - 9880.812: 55.8635% ( 20) 00:07:49.411 9880.812 - 9931.225: 56.0280% ( 20) 00:07:49.411 9931.225 - 9981.637: 56.1924% ( 20) 00:07:49.411 9981.637 - 10032.049: 56.3240% ( 16) 00:07:49.411 10032.049 - 10082.462: 56.4885% ( 20) 00:07:49.411 10082.462 - 
10132.874: 56.6776% ( 23) 00:07:49.411 10132.874 - 10183.286: 56.8174% ( 17) 00:07:49.411 10183.286 - 10233.698: 56.8997% ( 10) 00:07:49.411 10233.698 - 10284.111: 56.9819% ( 10) 00:07:49.411 10284.111 - 10334.523: 57.0724% ( 11) 00:07:49.411 10334.523 - 10384.935: 57.1628% ( 11) 00:07:49.411 10384.935 - 10435.348: 57.2615% ( 12) 00:07:49.411 10435.348 - 10485.760: 57.3520% ( 11) 00:07:49.411 10485.760 - 10536.172: 57.4424% ( 11) 00:07:49.411 10536.172 - 10586.585: 57.5411% ( 12) 00:07:49.411 10586.585 - 10636.997: 57.6398% ( 12) 00:07:49.411 10636.997 - 10687.409: 57.7303% ( 11) 00:07:49.411 10687.409 - 10737.822: 57.7878% ( 7) 00:07:49.411 10737.822 - 10788.234: 57.8207% ( 4) 00:07:49.411 10788.234 - 10838.646: 57.8536% ( 4) 00:07:49.411 10838.646 - 10889.058: 57.8783% ( 3) 00:07:49.411 10889.058 - 10939.471: 57.8947% ( 2) 00:07:49.411 11040.295 - 11090.708: 57.9030% ( 1) 00:07:49.411 11090.708 - 11141.120: 57.9359% ( 4) 00:07:49.411 11141.120 - 11191.532: 57.9934% ( 7) 00:07:49.411 11191.532 - 11241.945: 58.0592% ( 8) 00:07:49.411 11241.945 - 11292.357: 58.1168% ( 7) 00:07:49.411 11292.357 - 11342.769: 58.1661% ( 6) 00:07:49.411 11342.769 - 11393.182: 58.2237% ( 7) 00:07:49.411 11393.182 - 11443.594: 58.2812% ( 7) 00:07:49.411 11443.594 - 11494.006: 58.3306% ( 6) 00:07:49.411 11494.006 - 11544.418: 58.3799% ( 6) 00:07:49.411 11544.418 - 11594.831: 58.4457% ( 8) 00:07:49.411 11594.831 - 11645.243: 58.5609% ( 14) 00:07:49.411 11645.243 - 11695.655: 58.6760% ( 14) 00:07:49.411 11695.655 - 11746.068: 58.7829% ( 13) 00:07:49.411 11746.068 - 11796.480: 58.8734% ( 11) 00:07:49.411 11796.480 - 11846.892: 59.0296% ( 19) 00:07:49.411 11846.892 - 11897.305: 59.1530% ( 15) 00:07:49.411 11897.305 - 11947.717: 59.3174% ( 20) 00:07:49.411 11947.717 - 11998.129: 59.4819% ( 20) 00:07:49.411 11998.129 - 12048.542: 59.5888% ( 13) 00:07:49.411 12048.542 - 12098.954: 59.7122% ( 15) 00:07:49.411 12098.954 - 12149.366: 59.8273% ( 14) 00:07:49.411 12149.366 - 12199.778: 59.9507% ( 15) 00:07:49.411 12199.778 - 12250.191: 60.0576% ( 13) 00:07:49.411 12250.191 - 12300.603: 60.1727% ( 14) 00:07:49.411 12300.603 - 12351.015: 60.2961% ( 15) 00:07:49.411 12351.015 - 12401.428: 60.4441% ( 18) 00:07:49.411 12401.428 - 12451.840: 60.5921% ( 18) 00:07:49.411 12451.840 - 12502.252: 60.7155% ( 15) 00:07:49.411 12502.252 - 12552.665: 60.8306% ( 14) 00:07:49.411 12552.665 - 12603.077: 60.9539% ( 15) 00:07:49.411 12603.077 - 12653.489: 61.0691% ( 14) 00:07:49.411 12653.489 - 12703.902: 61.1924% ( 15) 00:07:49.411 12703.902 - 12754.314: 61.3158% ( 15) 00:07:49.411 12754.314 - 12804.726: 61.4803% ( 20) 00:07:49.411 12804.726 - 12855.138: 61.6365% ( 19) 00:07:49.411 12855.138 - 12905.551: 61.7845% ( 18) 00:07:49.411 12905.551 - 13006.375: 62.0888% ( 37) 00:07:49.411 13006.375 - 13107.200: 62.3931% ( 37) 00:07:49.411 13107.200 - 13208.025: 62.8865% ( 60) 00:07:49.411 13208.025 - 13308.849: 63.3717% ( 59) 00:07:49.411 13308.849 - 13409.674: 63.8651% ( 60) 00:07:49.411 13409.674 - 13510.498: 64.4079% ( 66) 00:07:49.411 13510.498 - 13611.323: 64.9671% ( 68) 00:07:49.411 13611.323 - 13712.148: 65.6168% ( 79) 00:07:49.411 13712.148 - 13812.972: 66.3076% ( 84) 00:07:49.411 13812.972 - 13913.797: 66.8914% ( 71) 00:07:49.411 13913.797 - 14014.622: 67.5000% ( 74) 00:07:49.411 14014.622 - 14115.446: 68.1414% ( 78) 00:07:49.411 14115.446 - 14216.271: 68.8322% ( 84) 00:07:49.411 14216.271 - 14317.095: 69.5230% ( 84) 00:07:49.411 14317.095 - 14417.920: 70.1727% ( 79) 00:07:49.411 14417.920 - 14518.745: 70.8635% ( 84) 00:07:49.411 14518.745 - 
14619.569: 71.8339% ( 118) 00:07:49.411 14619.569 - 14720.394: 72.7961% ( 117) 00:07:49.411 14720.394 - 14821.218: 73.5691% ( 94) 00:07:49.411 14821.218 - 14922.043: 74.2845% ( 87) 00:07:49.411 14922.043 - 15022.868: 75.0740% ( 96) 00:07:49.411 15022.868 - 15123.692: 75.7648% ( 84) 00:07:49.411 15123.692 - 15224.517: 76.4227% ( 80) 00:07:49.411 15224.517 - 15325.342: 77.1135% ( 84) 00:07:49.411 15325.342 - 15426.166: 77.8618% ( 91) 00:07:49.411 15426.166 - 15526.991: 78.5609% ( 85) 00:07:49.411 15526.991 - 15627.815: 79.4079% ( 103) 00:07:49.411 15627.815 - 15728.640: 80.1645% ( 92) 00:07:49.411 15728.640 - 15829.465: 81.0280% ( 105) 00:07:49.411 15829.465 - 15930.289: 81.8174% ( 96) 00:07:49.411 15930.289 - 16031.114: 82.7878% ( 118) 00:07:49.411 16031.114 - 16131.938: 83.7089% ( 112) 00:07:49.411 16131.938 - 16232.763: 84.8520% ( 139) 00:07:49.411 16232.763 - 16333.588: 85.9622% ( 135) 00:07:49.411 16333.588 - 16434.412: 87.3849% ( 173) 00:07:49.411 16434.412 - 16535.237: 88.6595% ( 155) 00:07:49.411 16535.237 - 16636.062: 89.9095% ( 152) 00:07:49.411 16636.062 - 16736.886: 91.1349% ( 149) 00:07:49.411 16736.886 - 16837.711: 92.1053% ( 118) 00:07:49.411 16837.711 - 16938.535: 92.9688% ( 105) 00:07:49.411 16938.535 - 17039.360: 93.7747% ( 98) 00:07:49.411 17039.360 - 17140.185: 94.3503% ( 70) 00:07:49.411 17140.185 - 17241.009: 94.9013% ( 67) 00:07:49.411 17241.009 - 17341.834: 95.4030% ( 61) 00:07:49.411 17341.834 - 17442.658: 95.9046% ( 61) 00:07:49.411 17442.658 - 17543.483: 96.3076% ( 49) 00:07:49.411 17543.483 - 17644.308: 96.6530% ( 42) 00:07:49.411 17644.308 - 17745.132: 96.9572% ( 37) 00:07:49.412 17745.132 - 17845.957: 97.3109% ( 43) 00:07:49.412 17845.957 - 17946.782: 97.6069% ( 36) 00:07:49.412 17946.782 - 18047.606: 97.8947% ( 35) 00:07:49.412 18047.606 - 18148.431: 98.1579% ( 32) 00:07:49.412 18148.431 - 18249.255: 98.3306% ( 21) 00:07:49.412 18249.255 - 18350.080: 98.5197% ( 23) 00:07:49.412 18350.080 - 18450.905: 98.6513% ( 16) 00:07:49.412 18450.905 - 18551.729: 98.7336% ( 10) 00:07:49.412 18551.729 - 18652.554: 98.8158% ( 10) 00:07:49.412 18652.554 - 18753.378: 98.8980% ( 10) 00:07:49.412 18753.378 - 18854.203: 98.9474% ( 6) 00:07:49.412 30045.735 - 30247.385: 99.0214% ( 9) 00:07:49.412 30247.385 - 30449.034: 99.0789% ( 7) 00:07:49.412 30449.034 - 30650.683: 99.1530% ( 9) 00:07:49.412 30650.683 - 30852.332: 99.2188% ( 8) 00:07:49.412 30852.332 - 31053.982: 99.2845% ( 8) 00:07:49.412 31053.982 - 31255.631: 99.3503% ( 8) 00:07:49.412 31255.631 - 31457.280: 99.4161% ( 8) 00:07:49.412 31457.280 - 31658.929: 99.4737% ( 7) 00:07:49.412 37506.757 - 37708.406: 99.5395% ( 8) 00:07:49.412 37708.406 - 37910.055: 99.5970% ( 7) 00:07:49.412 37910.055 - 38111.705: 99.6628% ( 8) 00:07:49.412 38111.705 - 38313.354: 99.7286% ( 8) 00:07:49.412 38313.354 - 38515.003: 99.7944% ( 8) 00:07:49.412 38515.003 - 38716.652: 99.8602% ( 8) 00:07:49.412 38716.652 - 38918.302: 99.9260% ( 8) 00:07:49.412 38918.302 - 39119.951: 99.9918% ( 8) 00:07:49.412 39119.951 - 39321.600: 100.0000% ( 1) 00:07:49.412 00:07:49.412 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:49.412 ============================================================================== 00:07:49.412 Range in us Cumulative IO count 00:07:49.412 5570.560 - 5595.766: 0.0247% ( 3) 00:07:49.412 5595.766 - 5620.972: 0.1069% ( 10) 00:07:49.412 5620.972 - 5646.178: 0.3043% ( 24) 00:07:49.412 5646.178 - 5671.385: 0.6661% ( 44) 00:07:49.412 5671.385 - 5696.591: 0.9868% ( 39) 00:07:49.412 5696.591 - 5721.797: 1.4803% ( 60) 
00:07:49.412 5721.797 - 5747.003: 2.1546% ( 82) 00:07:49.412 5747.003 - 5772.209: 2.8618% ( 86) 00:07:49.412 5772.209 - 5797.415: 3.6595% ( 97) 00:07:49.412 5797.415 - 5822.622: 4.5395% ( 107) 00:07:49.412 5822.622 - 5847.828: 5.2549% ( 87) 00:07:49.412 5847.828 - 5873.034: 6.1102% ( 104) 00:07:49.412 5873.034 - 5898.240: 6.9655% ( 104) 00:07:49.412 5898.240 - 5923.446: 7.8536% ( 108) 00:07:49.412 5923.446 - 5948.652: 8.7418% ( 108) 00:07:49.412 5948.652 - 5973.858: 9.7039% ( 117) 00:07:49.412 5973.858 - 5999.065: 10.7401% ( 126) 00:07:49.412 5999.065 - 6024.271: 11.6447% ( 110) 00:07:49.412 6024.271 - 6049.477: 12.5822% ( 114) 00:07:49.412 6049.477 - 6074.683: 13.5773% ( 121) 00:07:49.412 6074.683 - 6099.889: 14.4819% ( 110) 00:07:49.412 6099.889 - 6125.095: 15.4770% ( 121) 00:07:49.412 6125.095 - 6150.302: 16.4062% ( 113) 00:07:49.412 6150.302 - 6175.508: 17.3684% ( 117) 00:07:49.412 6175.508 - 6200.714: 18.2895% ( 112) 00:07:49.412 6200.714 - 6225.920: 19.3257% ( 126) 00:07:49.412 6225.920 - 6251.126: 20.3701% ( 127) 00:07:49.412 6251.126 - 6276.332: 21.2829% ( 111) 00:07:49.412 6276.332 - 6301.538: 22.4836% ( 146) 00:07:49.412 6301.538 - 6326.745: 23.3799% ( 109) 00:07:49.412 6326.745 - 6351.951: 24.4572% ( 131) 00:07:49.412 6351.951 - 6377.157: 25.3865% ( 113) 00:07:49.412 6377.157 - 6402.363: 26.5049% ( 136) 00:07:49.412 6402.363 - 6427.569: 27.4753% ( 118) 00:07:49.412 6427.569 - 6452.775: 28.6020% ( 137) 00:07:49.412 6452.775 - 6503.188: 30.6003% ( 243) 00:07:49.412 6503.188 - 6553.600: 32.6398% ( 248) 00:07:49.412 6553.600 - 6604.012: 34.6711% ( 247) 00:07:49.412 6604.012 - 6654.425: 36.6036% ( 235) 00:07:49.412 6654.425 - 6704.837: 38.0839% ( 180) 00:07:49.412 6704.837 - 6755.249: 39.3174% ( 150) 00:07:49.412 6755.249 - 6805.662: 40.1234% ( 98) 00:07:49.412 6805.662 - 6856.074: 40.8059% ( 83) 00:07:49.412 6856.074 - 6906.486: 41.3076% ( 61) 00:07:49.412 6906.486 - 6956.898: 41.7188% ( 50) 00:07:49.412 6956.898 - 7007.311: 42.0806% ( 44) 00:07:49.412 7007.311 - 7057.723: 42.4095% ( 40) 00:07:49.412 7057.723 - 7108.135: 42.7138% ( 37) 00:07:49.412 7108.135 - 7158.548: 42.9934% ( 34) 00:07:49.412 7158.548 - 7208.960: 43.3388% ( 42) 00:07:49.412 7208.960 - 7259.372: 43.6513% ( 38) 00:07:49.412 7259.372 - 7309.785: 43.9885% ( 41) 00:07:49.412 7309.785 - 7360.197: 44.2516% ( 32) 00:07:49.412 7360.197 - 7410.609: 44.5970% ( 42) 00:07:49.412 7410.609 - 7461.022: 44.9095% ( 38) 00:07:49.412 7461.022 - 7511.434: 45.1809% ( 33) 00:07:49.412 7511.434 - 7561.846: 45.5181% ( 41) 00:07:49.412 7561.846 - 7612.258: 45.8635% ( 42) 00:07:49.412 7612.258 - 7662.671: 46.1513% ( 35) 00:07:49.412 7662.671 - 7713.083: 46.4309% ( 34) 00:07:49.412 7713.083 - 7763.495: 46.7434% ( 38) 00:07:49.412 7763.495 - 7813.908: 46.9984% ( 31) 00:07:49.412 7813.908 - 7864.320: 47.2615% ( 32) 00:07:49.412 7864.320 - 7914.732: 47.5411% ( 34) 00:07:49.412 7914.732 - 7965.145: 47.7961% ( 31) 00:07:49.412 7965.145 - 8015.557: 48.1086% ( 38) 00:07:49.412 8015.557 - 8065.969: 48.3306% ( 27) 00:07:49.412 8065.969 - 8116.382: 48.5938% ( 32) 00:07:49.412 8116.382 - 8166.794: 48.8405% ( 30) 00:07:49.412 8166.794 - 8217.206: 49.0789% ( 29) 00:07:49.412 8217.206 - 8267.618: 49.2845% ( 25) 00:07:49.412 8267.618 - 8318.031: 49.5066% ( 27) 00:07:49.412 8318.031 - 8368.443: 49.7204% ( 26) 00:07:49.412 8368.443 - 8418.855: 49.9507% ( 28) 00:07:49.412 8418.855 - 8469.268: 50.2138% ( 32) 00:07:49.412 8469.268 - 8519.680: 50.4523% ( 29) 00:07:49.412 8519.680 - 8570.092: 50.6661% ( 26) 00:07:49.412 8570.092 - 8620.505: 50.8553% ( 23) 
00:07:49.412 8620.505 - 8670.917: 51.0855% ( 28) 00:07:49.412 8670.917 - 8721.329: 51.2829% ( 24) 00:07:49.412 8721.329 - 8771.742: 51.4885% ( 25) 00:07:49.412 8771.742 - 8822.154: 51.7105% ( 27) 00:07:49.412 8822.154 - 8872.566: 51.8421% ( 16) 00:07:49.412 8872.566 - 8922.978: 52.0395% ( 24) 00:07:49.412 8922.978 - 8973.391: 52.2286% ( 23) 00:07:49.412 8973.391 - 9023.803: 52.3931% ( 20) 00:07:49.412 9023.803 - 9074.215: 52.5576% ( 20) 00:07:49.412 9074.215 - 9124.628: 52.7467% ( 23) 00:07:49.412 9124.628 - 9175.040: 52.9441% ( 24) 00:07:49.412 9175.040 - 9225.452: 53.0839% ( 17) 00:07:49.412 9225.452 - 9275.865: 53.2730% ( 23) 00:07:49.412 9275.865 - 9326.277: 53.4457% ( 21) 00:07:49.412 9326.277 - 9376.689: 53.6349% ( 23) 00:07:49.412 9376.689 - 9427.102: 53.8322% ( 24) 00:07:49.412 9427.102 - 9477.514: 53.9556% ( 15) 00:07:49.412 9477.514 - 9527.926: 54.2270% ( 33) 00:07:49.412 9527.926 - 9578.338: 54.3750% ( 18) 00:07:49.412 9578.338 - 9628.751: 54.5477% ( 21) 00:07:49.412 9628.751 - 9679.163: 54.6793% ( 16) 00:07:49.412 9679.163 - 9729.575: 54.8273% ( 18) 00:07:49.412 9729.575 - 9779.988: 54.9836% ( 19) 00:07:49.412 9779.988 - 9830.400: 55.1398% ( 19) 00:07:49.412 9830.400 - 9880.812: 55.3043% ( 20) 00:07:49.412 9880.812 - 9931.225: 55.4605% ( 19) 00:07:49.412 9931.225 - 9981.637: 55.6168% ( 19) 00:07:49.412 9981.637 - 10032.049: 55.7648% ( 18) 00:07:49.412 10032.049 - 10082.462: 55.8882% ( 15) 00:07:49.412 10082.462 - 10132.874: 56.0526% ( 20) 00:07:49.412 10132.874 - 10183.286: 56.1924% ( 17) 00:07:49.412 10183.286 - 10233.698: 56.3816% ( 23) 00:07:49.412 10233.698 - 10284.111: 56.5461% ( 20) 00:07:49.412 10284.111 - 10334.523: 56.7352% ( 23) 00:07:49.412 10334.523 - 10384.935: 56.8586% ( 15) 00:07:49.412 10384.935 - 10435.348: 56.9655% ( 13) 00:07:49.412 10435.348 - 10485.760: 57.0641% ( 12) 00:07:49.412 10485.760 - 10536.172: 57.2615% ( 24) 00:07:49.412 10536.172 - 10586.585: 57.3109% ( 6) 00:07:49.412 10586.585 - 10636.997: 57.4507% ( 17) 00:07:49.412 10636.997 - 10687.409: 57.5329% ( 10) 00:07:49.412 10687.409 - 10737.822: 57.6151% ( 10) 00:07:49.412 10737.822 - 10788.234: 57.7056% ( 11) 00:07:49.412 10788.234 - 10838.646: 57.7796% ( 9) 00:07:49.412 10838.646 - 10889.058: 57.8372% ( 7) 00:07:49.412 10889.058 - 10939.471: 57.9194% ( 10) 00:07:49.412 10939.471 - 10989.883: 57.9770% ( 7) 00:07:49.412 10989.883 - 11040.295: 58.0099% ( 4) 00:07:49.412 11040.295 - 11090.708: 58.0674% ( 7) 00:07:49.412 11090.708 - 11141.120: 58.1579% ( 11) 00:07:49.412 11141.120 - 11191.532: 58.2072% ( 6) 00:07:49.412 11191.532 - 11241.945: 58.2895% ( 10) 00:07:49.412 11241.945 - 11292.357: 58.3964% ( 13) 00:07:49.412 11292.357 - 11342.769: 58.4868% ( 11) 00:07:49.412 11342.769 - 11393.182: 58.5773% ( 11) 00:07:49.412 11393.182 - 11443.594: 58.6595% ( 10) 00:07:49.412 11443.594 - 11494.006: 58.7993% ( 17) 00:07:49.412 11494.006 - 11544.418: 58.8651% ( 8) 00:07:49.412 11544.418 - 11594.831: 58.9474% ( 10) 00:07:49.412 11594.831 - 11645.243: 59.1283% ( 22) 00:07:49.412 11645.243 - 11695.655: 59.2599% ( 16) 00:07:49.412 11695.655 - 11746.068: 59.3503% ( 11) 00:07:49.412 11746.068 - 11796.480: 59.4243% ( 9) 00:07:49.412 11796.480 - 11846.892: 59.5559% ( 16) 00:07:49.412 11846.892 - 11897.305: 59.7286% ( 21) 00:07:49.413 11897.305 - 11947.717: 59.8520% ( 15) 00:07:49.413 11947.717 - 11998.129: 60.0411% ( 23) 00:07:49.413 11998.129 - 12048.542: 60.1645% ( 15) 00:07:49.413 12048.542 - 12098.954: 60.3618% ( 24) 00:07:49.413 12098.954 - 12149.366: 60.4605% ( 12) 00:07:49.413 12149.366 - 12199.778: 60.6086% ( 
18) 00:07:49.413 12199.778 - 12250.191: 60.8141% ( 25) 00:07:49.413 12250.191 - 12300.603: 60.8964% ( 10) 00:07:49.413 12300.603 - 12351.015: 60.9951% ( 12) 00:07:49.413 12351.015 - 12401.428: 61.1184% ( 15) 00:07:49.413 12401.428 - 12451.840: 61.2993% ( 22) 00:07:49.413 12451.840 - 12502.252: 61.3816% ( 10) 00:07:49.413 12502.252 - 12552.665: 61.4885% ( 13) 00:07:49.413 12552.665 - 12603.077: 61.6201% ( 16) 00:07:49.413 12603.077 - 12653.489: 61.6941% ( 9) 00:07:49.413 12653.489 - 12703.902: 61.7516% ( 7) 00:07:49.413 12703.902 - 12754.314: 61.8668% ( 14) 00:07:49.413 12754.314 - 12804.726: 61.9490% ( 10) 00:07:49.413 12804.726 - 12855.138: 62.0888% ( 17) 00:07:49.413 12855.138 - 12905.551: 62.1464% ( 7) 00:07:49.413 12905.551 - 13006.375: 62.3438% ( 24) 00:07:49.413 13006.375 - 13107.200: 62.5740% ( 28) 00:07:49.413 13107.200 - 13208.025: 62.8207% ( 30) 00:07:49.413 13208.025 - 13308.849: 63.1003% ( 34) 00:07:49.413 13308.849 - 13409.674: 63.4375% ( 41) 00:07:49.413 13409.674 - 13510.498: 63.7500% ( 38) 00:07:49.413 13510.498 - 13611.323: 64.1118% ( 44) 00:07:49.413 13611.323 - 13712.148: 64.8026% ( 84) 00:07:49.413 13712.148 - 13812.972: 65.2467% ( 54) 00:07:49.413 13812.972 - 13913.797: 65.8470% ( 73) 00:07:49.413 13913.797 - 14014.622: 66.6283% ( 95) 00:07:49.413 14014.622 - 14115.446: 67.4342% ( 98) 00:07:49.413 14115.446 - 14216.271: 68.3059% ( 106) 00:07:49.413 14216.271 - 14317.095: 69.2105% ( 110) 00:07:49.413 14317.095 - 14417.920: 69.9836% ( 94) 00:07:49.413 14417.920 - 14518.745: 70.8141% ( 101) 00:07:49.413 14518.745 - 14619.569: 71.6447% ( 101) 00:07:49.413 14619.569 - 14720.394: 72.4260% ( 95) 00:07:49.413 14720.394 - 14821.218: 73.3388% ( 111) 00:07:49.413 14821.218 - 14922.043: 74.0707% ( 89) 00:07:49.413 14922.043 - 15022.868: 74.9507% ( 107) 00:07:49.413 15022.868 - 15123.692: 75.8141% ( 105) 00:07:49.413 15123.692 - 15224.517: 76.4967% ( 83) 00:07:49.413 15224.517 - 15325.342: 77.3109% ( 99) 00:07:49.413 15325.342 - 15426.166: 78.0839% ( 94) 00:07:49.413 15426.166 - 15526.991: 78.8487% ( 93) 00:07:49.413 15526.991 - 15627.815: 79.7368% ( 108) 00:07:49.413 15627.815 - 15728.640: 80.5921% ( 104) 00:07:49.413 15728.640 - 15829.465: 81.4967% ( 110) 00:07:49.413 15829.465 - 15930.289: 82.6727% ( 143) 00:07:49.413 15930.289 - 16031.114: 83.5526% ( 107) 00:07:49.413 16031.114 - 16131.938: 84.5970% ( 127) 00:07:49.413 16131.938 - 16232.763: 85.4441% ( 103) 00:07:49.413 16232.763 - 16333.588: 86.1924% ( 91) 00:07:49.413 16333.588 - 16434.412: 87.2862% ( 133) 00:07:49.413 16434.412 - 16535.237: 88.4622% ( 143) 00:07:49.413 16535.237 - 16636.062: 89.1859% ( 88) 00:07:49.413 16636.062 - 16736.886: 90.2138% ( 125) 00:07:49.413 16736.886 - 16837.711: 91.1349% ( 112) 00:07:49.413 16837.711 - 16938.535: 92.0641% ( 113) 00:07:49.413 16938.535 - 17039.360: 92.9194% ( 104) 00:07:49.413 17039.360 - 17140.185: 93.9967% ( 131) 00:07:49.413 17140.185 - 17241.009: 94.8766% ( 107) 00:07:49.413 17241.009 - 17341.834: 95.3865% ( 62) 00:07:49.413 17341.834 - 17442.658: 95.9375% ( 67) 00:07:49.413 17442.658 - 17543.483: 96.4803% ( 66) 00:07:49.413 17543.483 - 17644.308: 97.0477% ( 69) 00:07:49.413 17644.308 - 17745.132: 97.4178% ( 45) 00:07:49.413 17745.132 - 17845.957: 97.7303% ( 38) 00:07:49.413 17845.957 - 17946.782: 97.8865% ( 19) 00:07:49.413 17946.782 - 18047.606: 98.0757% ( 23) 00:07:49.413 18047.606 - 18148.431: 98.2401% ( 20) 00:07:49.413 18148.431 - 18249.255: 98.3553% ( 14) 00:07:49.413 18249.255 - 18350.080: 98.4457% ( 11) 00:07:49.413 18350.080 - 18450.905: 98.5609% ( 14) 00:07:49.413 
18450.905 - 18551.729: 98.6924% ( 16) 00:07:49.413 18551.729 - 18652.554: 98.7336% ( 5) 00:07:49.413 18652.554 - 18753.378: 98.7747% ( 5) 00:07:49.413 18753.378 - 18854.203: 98.8651% ( 11) 00:07:49.413 18854.203 - 18955.028: 98.8898% ( 3) 00:07:49.413 18955.028 - 19055.852: 98.9145% ( 3) 00:07:49.413 19055.852 - 19156.677: 98.9474% ( 4) 00:07:49.413 28230.892 - 28432.542: 98.9885% ( 5) 00:07:49.413 28432.542 - 28634.191: 99.0461% ( 7) 00:07:49.413 28634.191 - 28835.840: 99.1036% ( 7) 00:07:49.413 28835.840 - 29037.489: 99.1612% ( 7) 00:07:49.413 29037.489 - 29239.138: 99.2270% ( 8) 00:07:49.413 29239.138 - 29440.788: 99.2845% ( 7) 00:07:49.413 29440.788 - 29642.437: 99.3339% ( 6) 00:07:49.413 29642.437 - 29844.086: 99.4079% ( 9) 00:07:49.413 29844.086 - 30045.735: 99.4572% ( 6) 00:07:49.413 30045.735 - 30247.385: 99.4737% ( 2) 00:07:49.413 35893.563 - 36095.212: 99.4819% ( 1) 00:07:49.413 36095.212 - 36296.862: 99.5477% ( 8) 00:07:49.413 36296.862 - 36498.511: 99.6053% ( 7) 00:07:49.413 36498.511 - 36700.160: 99.6628% ( 7) 00:07:49.413 36700.160 - 36901.809: 99.7204% ( 7) 00:07:49.413 36901.809 - 37103.458: 99.7697% ( 6) 00:07:49.413 37103.458 - 37305.108: 99.8191% ( 6) 00:07:49.413 37305.108 - 37506.757: 99.9013% ( 10) 00:07:49.413 37506.757 - 37708.406: 99.9589% ( 7) 00:07:49.413 37708.406 - 37910.055: 100.0000% ( 5) 00:07:49.413 00:07:49.413 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:49.413 ============================================================================== 00:07:49.413 Range in us Cumulative IO count 00:07:49.413 5646.178 - 5671.385: 0.0164% ( 2) 00:07:49.413 5671.385 - 5696.591: 0.1645% ( 18) 00:07:49.413 5696.591 - 5721.797: 0.4194% ( 31) 00:07:49.413 5721.797 - 5747.003: 0.7155% ( 36) 00:07:49.413 5747.003 - 5772.209: 1.1184% ( 49) 00:07:49.413 5772.209 - 5797.415: 1.7023% ( 71) 00:07:49.413 5797.415 - 5822.622: 2.4178% ( 87) 00:07:49.413 5822.622 - 5847.828: 3.1497% ( 89) 00:07:49.413 5847.828 - 5873.034: 3.9556% ( 98) 00:07:49.413 5873.034 - 5898.240: 4.9013% ( 115) 00:07:49.413 5898.240 - 5923.446: 5.8717% ( 118) 00:07:49.413 5923.446 - 5948.652: 6.8421% ( 118) 00:07:49.413 5948.652 - 5973.858: 7.8454% ( 122) 00:07:49.413 5973.858 - 5999.065: 8.9062% ( 129) 00:07:49.413 5999.065 - 6024.271: 9.8931% ( 120) 00:07:49.413 6024.271 - 6049.477: 11.0691% ( 143) 00:07:49.413 6049.477 - 6074.683: 12.1957% ( 137) 00:07:49.413 6074.683 - 6099.889: 13.4128% ( 148) 00:07:49.413 6099.889 - 6125.095: 14.5395% ( 137) 00:07:49.413 6125.095 - 6150.302: 15.6990% ( 141) 00:07:49.413 6150.302 - 6175.508: 16.9243% ( 149) 00:07:49.413 6175.508 - 6200.714: 18.0839% ( 141) 00:07:49.413 6200.714 - 6225.920: 19.2434% ( 141) 00:07:49.413 6225.920 - 6251.126: 20.4441% ( 146) 00:07:49.413 6251.126 - 6276.332: 21.5625% ( 136) 00:07:49.413 6276.332 - 6301.538: 22.7303% ( 142) 00:07:49.413 6301.538 - 6326.745: 23.8898% ( 141) 00:07:49.413 6326.745 - 6351.951: 25.1398% ( 152) 00:07:49.413 6351.951 - 6377.157: 26.3158% ( 143) 00:07:49.413 6377.157 - 6402.363: 27.5247% ( 147) 00:07:49.413 6402.363 - 6427.569: 28.8076% ( 156) 00:07:49.413 6427.569 - 6452.775: 30.0082% ( 146) 00:07:49.413 6452.775 - 6503.188: 32.3520% ( 285) 00:07:49.413 6503.188 - 6553.600: 34.4737% ( 258) 00:07:49.413 6553.600 - 6604.012: 36.3569% ( 229) 00:07:49.413 6604.012 - 6654.425: 37.8701% ( 184) 00:07:49.413 6654.425 - 6704.837: 38.8980% ( 125) 00:07:49.413 6704.837 - 6755.249: 39.6546% ( 92) 00:07:49.413 6755.249 - 6805.662: 40.3701% ( 87) 00:07:49.413 6805.662 - 6856.074: 40.9128% ( 66) 00:07:49.413 
6856.074 - 6906.486: 41.2829% ( 45) 00:07:49.413 6906.486 - 6956.898: 41.6283% ( 42) 00:07:49.413 6956.898 - 7007.311: 42.0148% ( 47) 00:07:49.413 7007.311 - 7057.723: 42.3684% ( 43) 00:07:49.413 7057.723 - 7108.135: 42.7467% ( 46) 00:07:49.413 7108.135 - 7158.548: 43.1579% ( 50) 00:07:49.413 7158.548 - 7208.960: 43.4622% ( 37) 00:07:49.413 7208.960 - 7259.372: 43.8322% ( 45) 00:07:49.413 7259.372 - 7309.785: 44.2023% ( 45) 00:07:49.413 7309.785 - 7360.197: 44.5724% ( 45) 00:07:49.413 7360.197 - 7410.609: 44.9260% ( 43) 00:07:49.413 7410.609 - 7461.022: 45.2714% ( 42) 00:07:49.413 7461.022 - 7511.434: 45.6250% ( 43) 00:07:49.413 7511.434 - 7561.846: 46.0197% ( 48) 00:07:49.413 7561.846 - 7612.258: 46.4227% ( 49) 00:07:49.413 7612.258 - 7662.671: 46.8010% ( 46) 00:07:49.413 7662.671 - 7713.083: 47.2204% ( 51) 00:07:49.413 7713.083 - 7763.495: 47.5576% ( 41) 00:07:49.413 7763.495 - 7813.908: 47.8372% ( 34) 00:07:49.413 7813.908 - 7864.320: 48.1497% ( 38) 00:07:49.413 7864.320 - 7914.732: 48.4128% ( 32) 00:07:49.413 7914.732 - 7965.145: 48.6513% ( 29) 00:07:49.413 7965.145 - 8015.557: 48.8816% ( 28) 00:07:49.413 8015.557 - 8065.969: 49.0954% ( 26) 00:07:49.413 8065.969 - 8116.382: 49.3174% ( 27) 00:07:49.413 8116.382 - 8166.794: 49.4901% ( 21) 00:07:49.413 8166.794 - 8217.206: 49.6957% ( 25) 00:07:49.414 8217.206 - 8267.618: 49.8684% ( 21) 00:07:49.414 8267.618 - 8318.031: 50.0000% ( 16) 00:07:49.414 8318.031 - 8368.443: 50.2056% ( 25) 00:07:49.414 8368.443 - 8418.855: 50.3865% ( 22) 00:07:49.414 8418.855 - 8469.268: 50.5592% ( 21) 00:07:49.414 8469.268 - 8519.680: 50.7237% ( 20) 00:07:49.414 8519.680 - 8570.092: 50.8799% ( 19) 00:07:49.414 8570.092 - 8620.505: 51.0444% ( 20) 00:07:49.414 8620.505 - 8670.917: 51.2418% ( 24) 00:07:49.414 8670.917 - 8721.329: 51.4145% ( 21) 00:07:49.414 8721.329 - 8771.742: 51.5214% ( 13) 00:07:49.414 8771.742 - 8822.154: 51.6694% ( 18) 00:07:49.414 8822.154 - 8872.566: 51.8092% ( 17) 00:07:49.414 8872.566 - 8922.978: 51.9243% ( 14) 00:07:49.414 8922.978 - 8973.391: 52.0888% ( 20) 00:07:49.414 8973.391 - 9023.803: 52.2368% ( 18) 00:07:49.414 9023.803 - 9074.215: 52.4095% ( 21) 00:07:49.414 9074.215 - 9124.628: 52.5658% ( 19) 00:07:49.414 9124.628 - 9175.040: 52.7303% ( 20) 00:07:49.414 9175.040 - 9225.452: 52.8865% ( 19) 00:07:49.414 9225.452 - 9275.865: 53.0428% ( 19) 00:07:49.414 9275.865 - 9326.277: 53.2319% ( 23) 00:07:49.414 9326.277 - 9376.689: 53.3717% ( 17) 00:07:49.414 9376.689 - 9427.102: 53.4868% ( 14) 00:07:49.414 9427.102 - 9477.514: 53.5691% ( 10) 00:07:49.414 9477.514 - 9527.926: 53.6349% ( 8) 00:07:49.414 9527.926 - 9578.338: 53.7089% ( 9) 00:07:49.414 9578.338 - 9628.751: 53.8322% ( 15) 00:07:49.414 9628.751 - 9679.163: 53.9145% ( 10) 00:07:49.414 9679.163 - 9729.575: 54.0132% ( 12) 00:07:49.414 9729.575 - 9779.988: 54.1365% ( 15) 00:07:49.414 9779.988 - 9830.400: 54.2763% ( 17) 00:07:49.414 9830.400 - 9880.812: 54.3750% ( 12) 00:07:49.414 9880.812 - 9931.225: 54.4901% ( 14) 00:07:49.414 9931.225 - 9981.637: 54.5970% ( 13) 00:07:49.414 9981.637 - 10032.049: 54.7286% ( 16) 00:07:49.414 10032.049 - 10082.462: 54.8849% ( 19) 00:07:49.414 10082.462 - 10132.874: 55.0247% ( 17) 00:07:49.414 10132.874 - 10183.286: 55.2138% ( 23) 00:07:49.414 10183.286 - 10233.698: 55.4523% ( 29) 00:07:49.414 10233.698 - 10284.111: 55.6661% ( 26) 00:07:49.414 10284.111 - 10334.523: 55.8964% ( 28) 00:07:49.414 10334.523 - 10384.935: 56.1020% ( 25) 00:07:49.414 10384.935 - 10435.348: 56.2829% ( 22) 00:07:49.414 10435.348 - 10485.760: 56.4474% ( 20) 00:07:49.414 10485.760 
- 10536.172: 56.6447% ( 24) 00:07:49.414 10536.172 - 10586.585: 56.7928% ( 18) 00:07:49.414 10586.585 - 10636.997: 56.9984% ( 25) 00:07:49.414 10636.997 - 10687.409: 57.1628% ( 20) 00:07:49.414 10687.409 - 10737.822: 57.2944% ( 16) 00:07:49.414 10737.822 - 10788.234: 57.4589% ( 20) 00:07:49.414 10788.234 - 10838.646: 57.5822% ( 15) 00:07:49.414 10838.646 - 10889.058: 57.7467% ( 20) 00:07:49.414 10889.058 - 10939.471: 57.8454% ( 12) 00:07:49.414 10939.471 - 10989.883: 57.9359% ( 11) 00:07:49.414 10989.883 - 11040.295: 58.0181% ( 10) 00:07:49.414 11040.295 - 11090.708: 58.1086% ( 11) 00:07:49.414 11090.708 - 11141.120: 58.2319% ( 15) 00:07:49.414 11141.120 - 11191.532: 58.3553% ( 15) 00:07:49.414 11191.532 - 11241.945: 58.4786% ( 15) 00:07:49.414 11241.945 - 11292.357: 58.6102% ( 16) 00:07:49.414 11292.357 - 11342.769: 58.7336% ( 15) 00:07:49.414 11342.769 - 11393.182: 58.8158% ( 10) 00:07:49.414 11393.182 - 11443.594: 58.9309% ( 14) 00:07:49.414 11443.594 - 11494.006: 59.0132% ( 10) 00:07:49.414 11494.006 - 11544.418: 59.1201% ( 13) 00:07:49.414 11544.418 - 11594.831: 59.2023% ( 10) 00:07:49.414 11594.831 - 11645.243: 59.3092% ( 13) 00:07:49.414 11645.243 - 11695.655: 59.4079% ( 12) 00:07:49.414 11695.655 - 11746.068: 59.5066% ( 12) 00:07:49.414 11746.068 - 11796.480: 59.6217% ( 14) 00:07:49.414 11796.480 - 11846.892: 59.7533% ( 16) 00:07:49.414 11846.892 - 11897.305: 59.9178% ( 20) 00:07:49.414 11897.305 - 11947.717: 60.1151% ( 24) 00:07:49.414 11947.717 - 11998.129: 60.2714% ( 19) 00:07:49.414 11998.129 - 12048.542: 60.3947% ( 15) 00:07:49.414 12048.542 - 12098.954: 60.4688% ( 9) 00:07:49.414 12098.954 - 12149.366: 60.5263% ( 7) 00:07:49.414 12149.366 - 12199.778: 60.5674% ( 5) 00:07:49.414 12199.778 - 12250.191: 60.6003% ( 4) 00:07:49.414 12250.191 - 12300.603: 60.6414% ( 5) 00:07:49.414 12300.603 - 12351.015: 60.6908% ( 6) 00:07:49.414 12351.015 - 12401.428: 60.7730% ( 10) 00:07:49.414 12401.428 - 12451.840: 60.8964% ( 15) 00:07:49.414 12451.840 - 12502.252: 61.0280% ( 16) 00:07:49.414 12502.252 - 12552.665: 61.1431% ( 14) 00:07:49.414 12552.665 - 12603.077: 61.2993% ( 19) 00:07:49.414 12603.077 - 12653.489: 61.4803% ( 22) 00:07:49.414 12653.489 - 12703.902: 61.6283% ( 18) 00:07:49.414 12703.902 - 12754.314: 61.7928% ( 20) 00:07:49.414 12754.314 - 12804.726: 61.9243% ( 16) 00:07:49.414 12804.726 - 12855.138: 62.0559% ( 16) 00:07:49.414 12855.138 - 12905.551: 62.2039% ( 18) 00:07:49.414 12905.551 - 13006.375: 62.5329% ( 40) 00:07:49.414 13006.375 - 13107.200: 62.8618% ( 40) 00:07:49.414 13107.200 - 13208.025: 63.1743% ( 38) 00:07:49.414 13208.025 - 13308.849: 63.4539% ( 34) 00:07:49.414 13308.849 - 13409.674: 63.7500% ( 36) 00:07:49.414 13409.674 - 13510.498: 64.2516% ( 61) 00:07:49.414 13510.498 - 13611.323: 64.6957% ( 54) 00:07:49.414 13611.323 - 13712.148: 65.2056% ( 62) 00:07:49.414 13712.148 - 13812.972: 65.8306% ( 76) 00:07:49.414 13812.972 - 13913.797: 66.4556% ( 76) 00:07:49.414 13913.797 - 14014.622: 66.9737% ( 63) 00:07:49.414 14014.622 - 14115.446: 67.6151% ( 78) 00:07:49.414 14115.446 - 14216.271: 68.2566% ( 78) 00:07:49.414 14216.271 - 14317.095: 68.9391% ( 83) 00:07:49.414 14317.095 - 14417.920: 69.7451% ( 98) 00:07:49.414 14417.920 - 14518.745: 70.6250% ( 107) 00:07:49.414 14518.745 - 14619.569: 71.4967% ( 106) 00:07:49.414 14619.569 - 14720.394: 72.2615% ( 93) 00:07:49.414 14720.394 - 14821.218: 73.1826% ( 112) 00:07:49.414 14821.218 - 14922.043: 73.9803% ( 97) 00:07:49.414 14922.043 - 15022.868: 74.8684% ( 108) 00:07:49.414 15022.868 - 15123.692: 75.7401% ( 106) 
00:07:49.414 15123.692 - 15224.517: 76.5872% ( 103) 00:07:49.414 15224.517 - 15325.342: 77.4507% ( 105) 00:07:49.414 15325.342 - 15426.166: 78.2648% ( 99) 00:07:49.414 15426.166 - 15526.991: 79.0625% ( 97) 00:07:49.414 15526.991 - 15627.815: 79.8438% ( 95) 00:07:49.414 15627.815 - 15728.640: 80.5674% ( 88) 00:07:49.414 15728.640 - 15829.465: 81.5049% ( 114) 00:07:49.414 15829.465 - 15930.289: 82.4836% ( 119) 00:07:49.414 15930.289 - 16031.114: 83.4211% ( 114) 00:07:49.414 16031.114 - 16131.938: 84.4326% ( 123) 00:07:49.414 16131.938 - 16232.763: 85.4770% ( 127) 00:07:49.414 16232.763 - 16333.588: 86.4885% ( 123) 00:07:49.414 16333.588 - 16434.412: 87.7714% ( 156) 00:07:49.414 16434.412 - 16535.237: 88.8076% ( 126) 00:07:49.414 16535.237 - 16636.062: 89.7122% ( 110) 00:07:49.414 16636.062 - 16736.886: 90.5921% ( 107) 00:07:49.414 16736.886 - 16837.711: 91.4145% ( 100) 00:07:49.414 16837.711 - 16938.535: 92.2122% ( 97) 00:07:49.414 16938.535 - 17039.360: 93.0510% ( 102) 00:07:49.414 17039.360 - 17140.185: 93.7089% ( 80) 00:07:49.414 17140.185 - 17241.009: 94.3421% ( 77) 00:07:49.414 17241.009 - 17341.834: 94.9753% ( 77) 00:07:49.414 17341.834 - 17442.658: 95.7730% ( 97) 00:07:49.414 17442.658 - 17543.483: 96.4227% ( 79) 00:07:49.414 17543.483 - 17644.308: 96.8997% ( 58) 00:07:49.414 17644.308 - 17745.132: 97.2533% ( 43) 00:07:49.414 17745.132 - 17845.957: 97.6234% ( 45) 00:07:49.414 17845.957 - 17946.782: 97.9030% ( 34) 00:07:49.414 17946.782 - 18047.606: 98.1250% ( 27) 00:07:49.414 18047.606 - 18148.431: 98.3224% ( 24) 00:07:49.414 18148.431 - 18249.255: 98.4622% ( 17) 00:07:49.414 18249.255 - 18350.080: 98.5855% ( 15) 00:07:49.414 18350.080 - 18450.905: 98.6678% ( 10) 00:07:49.414 18450.905 - 18551.729: 98.7418% ( 9) 00:07:49.414 18551.729 - 18652.554: 98.7911% ( 6) 00:07:49.414 18652.554 - 18753.378: 98.8240% ( 4) 00:07:49.414 18753.378 - 18854.203: 98.8569% ( 4) 00:07:49.414 18854.203 - 18955.028: 98.8980% ( 5) 00:07:49.414 18955.028 - 19055.852: 98.9391% ( 5) 00:07:49.414 19055.852 - 19156.677: 98.9474% ( 1) 00:07:49.414 26617.698 - 26819.348: 98.9967% ( 6) 00:07:49.414 26819.348 - 27020.997: 99.0543% ( 7) 00:07:49.414 27020.997 - 27222.646: 99.1201% ( 8) 00:07:49.414 27222.646 - 27424.295: 99.1776% ( 7) 00:07:49.414 27424.295 - 27625.945: 99.2434% ( 8) 00:07:49.414 27625.945 - 27827.594: 99.3092% ( 8) 00:07:49.414 27827.594 - 28029.243: 99.3668% ( 7) 00:07:49.414 28029.243 - 28230.892: 99.4326% ( 8) 00:07:49.414 28230.892 - 28432.542: 99.4737% ( 5) 00:07:49.414 34482.018 - 34683.668: 99.5148% ( 5) 00:07:49.414 34683.668 - 34885.317: 99.5724% ( 7) 00:07:49.415 34885.317 - 35086.966: 99.6382% ( 8) 00:07:49.415 35086.966 - 35288.615: 99.6957% ( 7) 00:07:49.415 35288.615 - 35490.265: 99.7615% ( 8) 00:07:49.415 35490.265 - 35691.914: 99.8191% ( 7) 00:07:49.415 35691.914 - 35893.563: 99.8849% ( 8) 00:07:49.415 35893.563 - 36095.212: 99.9424% ( 7) 00:07:49.415 36095.212 - 36296.862: 100.0000% ( 7) 00:07:49.415 00:07:49.415 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:49.415 ============================================================================== 00:07:49.415 Range in us Cumulative IO count 00:07:49.415 5671.385 - 5696.591: 0.0329% ( 4) 00:07:49.415 5696.591 - 5721.797: 0.1974% ( 20) 00:07:49.415 5721.797 - 5747.003: 0.3865% ( 23) 00:07:49.415 5747.003 - 5772.209: 0.9046% ( 63) 00:07:49.415 5772.209 - 5797.415: 1.5872% ( 83) 00:07:49.415 5797.415 - 5822.622: 2.1957% ( 74) 00:07:49.415 5822.622 - 5847.828: 2.9441% ( 91) 00:07:49.415 5847.828 - 5873.034: 3.8405% ( 
109) 00:07:49.415 5873.034 - 5898.240: 4.8766% ( 126) 00:07:49.415 5898.240 - 5923.446: 5.9868% ( 135) 00:07:49.415 5923.446 - 5948.652: 6.9408% ( 116) 00:07:49.415 5948.652 - 5973.858: 8.1003% ( 141) 00:07:49.415 5973.858 - 5999.065: 9.1776% ( 131) 00:07:49.415 5999.065 - 6024.271: 10.2714% ( 133) 00:07:49.415 6024.271 - 6049.477: 11.3898% ( 136) 00:07:49.415 6049.477 - 6074.683: 12.5329% ( 139) 00:07:49.415 6074.683 - 6099.889: 13.7582% ( 149) 00:07:49.415 6099.889 - 6125.095: 14.9589% ( 146) 00:07:49.415 6125.095 - 6150.302: 16.1595% ( 146) 00:07:49.415 6150.302 - 6175.508: 17.3026% ( 139) 00:07:49.415 6175.508 - 6200.714: 18.4293% ( 137) 00:07:49.415 6200.714 - 6225.920: 19.5559% ( 137) 00:07:49.415 6225.920 - 6251.126: 20.7648% ( 147) 00:07:49.415 6251.126 - 6276.332: 21.9490% ( 144) 00:07:49.415 6276.332 - 6301.538: 23.1250% ( 143) 00:07:49.415 6301.538 - 6326.745: 24.2516% ( 137) 00:07:49.415 6326.745 - 6351.951: 25.3947% ( 139) 00:07:49.415 6351.951 - 6377.157: 26.5543% ( 141) 00:07:49.415 6377.157 - 6402.363: 27.7385% ( 144) 00:07:49.415 6402.363 - 6427.569: 28.8569% ( 136) 00:07:49.415 6427.569 - 6452.775: 30.0658% ( 147) 00:07:49.415 6452.775 - 6503.188: 32.4507% ( 290) 00:07:49.415 6503.188 - 6553.600: 34.8355% ( 290) 00:07:49.415 6553.600 - 6604.012: 36.6118% ( 216) 00:07:49.415 6604.012 - 6654.425: 38.0428% ( 174) 00:07:49.415 6654.425 - 6704.837: 39.0625% ( 124) 00:07:49.415 6704.837 - 6755.249: 39.7039% ( 78) 00:07:49.415 6755.249 - 6805.662: 40.3043% ( 73) 00:07:49.415 6805.662 - 6856.074: 40.8882% ( 71) 00:07:49.415 6856.074 - 6906.486: 41.2993% ( 50) 00:07:49.415 6906.486 - 6956.898: 41.6941% ( 48) 00:07:49.415 6956.898 - 7007.311: 42.1464% ( 55) 00:07:49.415 7007.311 - 7057.723: 42.5987% ( 55) 00:07:49.415 7057.723 - 7108.135: 42.9441% ( 42) 00:07:49.415 7108.135 - 7158.548: 43.3141% ( 45) 00:07:49.415 7158.548 - 7208.960: 43.6678% ( 43) 00:07:49.415 7208.960 - 7259.372: 44.0461% ( 46) 00:07:49.415 7259.372 - 7309.785: 44.3668% ( 39) 00:07:49.415 7309.785 - 7360.197: 44.7122% ( 42) 00:07:49.415 7360.197 - 7410.609: 45.0576% ( 42) 00:07:49.415 7410.609 - 7461.022: 45.3536% ( 36) 00:07:49.415 7461.022 - 7511.434: 45.6579% ( 37) 00:07:49.415 7511.434 - 7561.846: 45.9704% ( 38) 00:07:49.415 7561.846 - 7612.258: 46.2747% ( 37) 00:07:49.415 7612.258 - 7662.671: 46.6283% ( 43) 00:07:49.415 7662.671 - 7713.083: 46.8668% ( 29) 00:07:49.415 7713.083 - 7763.495: 47.1053% ( 29) 00:07:49.415 7763.495 - 7813.908: 47.3026% ( 24) 00:07:49.415 7813.908 - 7864.320: 47.4424% ( 17) 00:07:49.415 7864.320 - 7914.732: 47.5576% ( 14) 00:07:49.415 7914.732 - 7965.145: 47.6974% ( 17) 00:07:49.415 7965.145 - 8015.557: 47.8372% ( 17) 00:07:49.415 8015.557 - 8065.969: 47.9770% ( 17) 00:07:49.415 8065.969 - 8116.382: 48.1579% ( 22) 00:07:49.415 8116.382 - 8166.794: 48.3964% ( 29) 00:07:49.415 8166.794 - 8217.206: 48.6513% ( 31) 00:07:49.415 8217.206 - 8267.618: 48.9885% ( 41) 00:07:49.415 8267.618 - 8318.031: 49.2681% ( 34) 00:07:49.415 8318.031 - 8368.443: 49.5641% ( 36) 00:07:49.415 8368.443 - 8418.855: 49.8602% ( 36) 00:07:49.415 8418.855 - 8469.268: 50.1727% ( 38) 00:07:49.415 8469.268 - 8519.680: 50.4441% ( 33) 00:07:49.415 8519.680 - 8570.092: 50.7155% ( 33) 00:07:49.415 8570.092 - 8620.505: 50.9375% ( 27) 00:07:49.415 8620.505 - 8670.917: 51.1924% ( 31) 00:07:49.415 8670.917 - 8721.329: 51.4391% ( 30) 00:07:49.415 8721.329 - 8771.742: 51.6612% ( 27) 00:07:49.415 8771.742 - 8822.154: 51.8586% ( 24) 00:07:49.415 8822.154 - 8872.566: 52.0395% ( 22) 00:07:49.415 8872.566 - 8922.978: 52.2039% ( 
20) 00:07:49.415 8922.978 - 8973.391: 52.3849% ( 22) 00:07:49.415 8973.391 - 9023.803: 52.5082% ( 15) 00:07:49.415 9023.803 - 9074.215: 52.6316% ( 15) 00:07:49.415 9074.215 - 9124.628: 52.7961% ( 20) 00:07:49.415 9124.628 - 9175.040: 52.9605% ( 20) 00:07:49.415 9175.040 - 9225.452: 53.1003% ( 17) 00:07:49.415 9225.452 - 9275.865: 53.2237% ( 15) 00:07:49.415 9275.865 - 9326.277: 53.3306% ( 13) 00:07:49.415 9326.277 - 9376.689: 53.4293% ( 12) 00:07:49.415 9376.689 - 9427.102: 53.5197% ( 11) 00:07:49.415 9427.102 - 9477.514: 53.6020% ( 10) 00:07:49.415 9477.514 - 9527.926: 53.6760% ( 9) 00:07:49.415 9527.926 - 9578.338: 53.8322% ( 19) 00:07:49.415 9578.338 - 9628.751: 53.9474% ( 14) 00:07:49.415 9628.751 - 9679.163: 54.0789% ( 16) 00:07:49.415 9679.163 - 9729.575: 54.1859% ( 13) 00:07:49.415 9729.575 - 9779.988: 54.3010% ( 14) 00:07:49.415 9779.988 - 9830.400: 54.4326% ( 16) 00:07:49.415 9830.400 - 9880.812: 54.5477% ( 14) 00:07:49.415 9880.812 - 9931.225: 54.6464% ( 12) 00:07:49.415 9931.225 - 9981.637: 54.8191% ( 21) 00:07:49.415 9981.637 - 10032.049: 54.9918% ( 21) 00:07:49.415 10032.049 - 10082.462: 55.1562% ( 20) 00:07:49.415 10082.462 - 10132.874: 55.2714% ( 14) 00:07:49.415 10132.874 - 10183.286: 55.3947% ( 15) 00:07:49.415 10183.286 - 10233.698: 55.5592% ( 20) 00:07:49.415 10233.698 - 10284.111: 55.6908% ( 16) 00:07:49.415 10284.111 - 10334.523: 55.8059% ( 14) 00:07:49.415 10334.523 - 10384.935: 55.9539% ( 18) 00:07:49.415 10384.935 - 10435.348: 56.1184% ( 20) 00:07:49.415 10435.348 - 10485.760: 56.2747% ( 19) 00:07:49.415 10485.760 - 10536.172: 56.4474% ( 21) 00:07:49.415 10536.172 - 10586.585: 56.6365% ( 23) 00:07:49.415 10586.585 - 10636.997: 56.8010% ( 20) 00:07:49.415 10636.997 - 10687.409: 56.9984% ( 24) 00:07:49.415 10687.409 - 10737.822: 57.1464% ( 18) 00:07:49.415 10737.822 - 10788.234: 57.2615% ( 14) 00:07:49.415 10788.234 - 10838.646: 57.3438% ( 10) 00:07:49.415 10838.646 - 10889.058: 57.4424% ( 12) 00:07:49.415 10889.058 - 10939.471: 57.5082% ( 8) 00:07:49.415 10939.471 - 10989.883: 57.5658% ( 7) 00:07:49.415 10989.883 - 11040.295: 57.6316% ( 8) 00:07:49.415 11040.295 - 11090.708: 57.7056% ( 9) 00:07:49.415 11090.708 - 11141.120: 57.7796% ( 9) 00:07:49.415 11141.120 - 11191.532: 57.8289% ( 6) 00:07:49.415 11191.532 - 11241.945: 57.8701% ( 5) 00:07:49.415 11241.945 - 11292.357: 57.9276% ( 7) 00:07:49.415 11292.357 - 11342.769: 58.0263% ( 12) 00:07:49.415 11342.769 - 11393.182: 58.1579% ( 16) 00:07:49.415 11393.182 - 11443.594: 58.3141% ( 19) 00:07:49.415 11443.594 - 11494.006: 58.4539% ( 17) 00:07:49.415 11494.006 - 11544.418: 58.5609% ( 13) 00:07:49.415 11544.418 - 11594.831: 58.7253% ( 20) 00:07:49.415 11594.831 - 11645.243: 58.8569% ( 16) 00:07:49.415 11645.243 - 11695.655: 59.0296% ( 21) 00:07:49.415 11695.655 - 11746.068: 59.2105% ( 22) 00:07:49.416 11746.068 - 11796.480: 59.3914% ( 22) 00:07:49.416 11796.480 - 11846.892: 59.5641% ( 21) 00:07:49.416 11846.892 - 11897.305: 59.7286% ( 20) 00:07:49.416 11897.305 - 11947.717: 59.9178% ( 23) 00:07:49.416 11947.717 - 11998.129: 60.1151% ( 24) 00:07:49.416 11998.129 - 12048.542: 60.3207% ( 25) 00:07:49.416 12048.542 - 12098.954: 60.5263% ( 25) 00:07:49.416 12098.954 - 12149.366: 60.7401% ( 26) 00:07:49.416 12149.366 - 12199.778: 60.9375% ( 24) 00:07:49.416 12199.778 - 12250.191: 61.1102% ( 21) 00:07:49.416 12250.191 - 12300.603: 61.2993% ( 23) 00:07:49.416 12300.603 - 12351.015: 61.4474% ( 18) 00:07:49.416 12351.015 - 12401.428: 61.6201% ( 21) 00:07:49.416 12401.428 - 12451.840: 61.7270% ( 13) 00:07:49.416 12451.840 - 
12502.252: 61.8010% ( 9) 00:07:49.416 12502.252 - 12552.665: 61.8832% ( 10) 00:07:49.416 12552.665 - 12603.077: 61.9408% ( 7) 00:07:49.416 12603.077 - 12653.489: 62.0148% ( 9) 00:07:49.416 12653.489 - 12703.902: 62.0888% ( 9) 00:07:49.416 12703.902 - 12754.314: 62.1464% ( 7) 00:07:49.416 12754.314 - 12804.726: 62.2039% ( 7) 00:07:49.416 12804.726 - 12855.138: 62.2615% ( 7) 00:07:49.416 12855.138 - 12905.551: 62.3438% ( 10) 00:07:49.416 12905.551 - 13006.375: 62.5082% ( 20) 00:07:49.416 13006.375 - 13107.200: 62.7056% ( 24) 00:07:49.416 13107.200 - 13208.025: 62.8783% ( 21) 00:07:49.416 13208.025 - 13308.849: 63.1579% ( 34) 00:07:49.416 13308.849 - 13409.674: 63.4622% ( 37) 00:07:49.416 13409.674 - 13510.498: 63.9062% ( 54) 00:07:49.416 13510.498 - 13611.323: 64.4243% ( 63) 00:07:49.416 13611.323 - 13712.148: 64.9836% ( 68) 00:07:49.416 13712.148 - 13812.972: 65.6250% ( 78) 00:07:49.416 13812.972 - 13913.797: 66.3734% ( 91) 00:07:49.416 13913.797 - 14014.622: 67.1382% ( 93) 00:07:49.416 14014.622 - 14115.446: 67.8043% ( 81) 00:07:49.416 14115.446 - 14216.271: 68.4951% ( 84) 00:07:49.416 14216.271 - 14317.095: 69.2845% ( 96) 00:07:49.416 14317.095 - 14417.920: 70.2138% ( 113) 00:07:49.416 14417.920 - 14518.745: 71.0033% ( 96) 00:07:49.416 14518.745 - 14619.569: 71.8092% ( 98) 00:07:49.416 14619.569 - 14720.394: 72.6234% ( 99) 00:07:49.416 14720.394 - 14821.218: 73.3553% ( 89) 00:07:49.416 14821.218 - 14922.043: 74.0214% ( 81) 00:07:49.416 14922.043 - 15022.868: 74.9342% ( 111) 00:07:49.416 15022.868 - 15123.692: 75.9128% ( 119) 00:07:49.416 15123.692 - 15224.517: 76.7599% ( 103) 00:07:49.416 15224.517 - 15325.342: 77.7467% ( 120) 00:07:49.416 15325.342 - 15426.166: 78.7829% ( 126) 00:07:49.416 15426.166 - 15526.991: 79.9260% ( 139) 00:07:49.416 15526.991 - 15627.815: 80.9293% ( 122) 00:07:49.416 15627.815 - 15728.640: 81.9737% ( 127) 00:07:49.416 15728.640 - 15829.465: 82.9194% ( 115) 00:07:49.416 15829.465 - 15930.289: 83.8158% ( 109) 00:07:49.416 15930.289 - 16031.114: 84.8191% ( 122) 00:07:49.416 16031.114 - 16131.938: 85.9293% ( 135) 00:07:49.416 16131.938 - 16232.763: 86.9572% ( 125) 00:07:49.416 16232.763 - 16333.588: 87.8947% ( 114) 00:07:49.416 16333.588 - 16434.412: 88.8240% ( 113) 00:07:49.416 16434.412 - 16535.237: 89.7368% ( 111) 00:07:49.416 16535.237 - 16636.062: 90.5674% ( 101) 00:07:49.416 16636.062 - 16736.886: 91.4967% ( 113) 00:07:49.416 16736.886 - 16837.711: 92.3438% ( 103) 00:07:49.416 16837.711 - 16938.535: 93.0757% ( 89) 00:07:49.416 16938.535 - 17039.360: 93.7911% ( 87) 00:07:49.416 17039.360 - 17140.185: 94.4161% ( 76) 00:07:49.416 17140.185 - 17241.009: 94.9342% ( 63) 00:07:49.416 17241.009 - 17341.834: 95.4112% ( 58) 00:07:49.416 17341.834 - 17442.658: 95.9046% ( 60) 00:07:49.416 17442.658 - 17543.483: 96.2911% ( 47) 00:07:49.416 17543.483 - 17644.308: 96.5954% ( 37) 00:07:49.416 17644.308 - 17745.132: 96.8421% ( 30) 00:07:49.416 17745.132 - 17845.957: 97.0806% ( 29) 00:07:49.416 17845.957 - 17946.782: 97.3109% ( 28) 00:07:49.416 17946.782 - 18047.606: 97.5987% ( 35) 00:07:49.416 18047.606 - 18148.431: 97.8618% ( 32) 00:07:49.416 18148.431 - 18249.255: 98.0674% ( 25) 00:07:49.416 18249.255 - 18350.080: 98.2401% ( 21) 00:07:49.416 18350.080 - 18450.905: 98.3635% ( 15) 00:07:49.416 18450.905 - 18551.729: 98.5115% ( 18) 00:07:49.416 18551.729 - 18652.554: 98.6513% ( 17) 00:07:49.416 18652.554 - 18753.378: 98.7336% ( 10) 00:07:49.416 18753.378 - 18854.203: 98.7747% ( 5) 00:07:49.416 18854.203 - 18955.028: 98.8158% ( 5) 00:07:49.416 18955.028 - 19055.852: 98.8487% ( 4) 
00:07:49.416 19055.852 - 19156.677: 98.8898% ( 5) 00:07:49.416 19156.677 - 19257.502: 98.9309% ( 5) 00:07:49.416 19257.502 - 19358.326: 98.9474% ( 2) 00:07:49.416 26617.698 - 26819.348: 98.9638% ( 2) 00:07:49.416 26819.348 - 27020.997: 99.0214% ( 7) 00:07:49.416 27020.997 - 27222.646: 99.0872% ( 8) 00:07:49.416 27222.646 - 27424.295: 99.1447% ( 7) 00:07:49.416 27424.295 - 27625.945: 99.2105% ( 8) 00:07:49.416 27625.945 - 27827.594: 99.2681% ( 7) 00:07:49.416 27827.594 - 28029.243: 99.3339% ( 8) 00:07:49.416 28029.243 - 28230.892: 99.3914% ( 7) 00:07:49.416 28230.892 - 28432.542: 99.4572% ( 8) 00:07:49.416 28432.542 - 28634.191: 99.4737% ( 2) 00:07:49.416 33675.422 - 33877.071: 99.5312% ( 7) 00:07:49.416 33877.071 - 34078.720: 99.5888% ( 7) 00:07:49.416 34078.720 - 34280.369: 99.6546% ( 8) 00:07:49.416 34280.369 - 34482.018: 99.7122% ( 7) 00:07:49.416 34482.018 - 34683.668: 99.7697% ( 7) 00:07:49.416 34683.668 - 34885.317: 99.8355% ( 8) 00:07:49.416 34885.317 - 35086.966: 99.9013% ( 8) 00:07:49.416 35086.966 - 35288.615: 99.9589% ( 7) 00:07:49.416 35288.615 - 35490.265: 100.0000% ( 5) 00:07:49.416 00:07:49.416 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:49.416 ============================================================================== 00:07:49.416 Range in us Cumulative IO count 00:07:49.416 5671.385 - 5696.591: 0.0082% ( 1) 00:07:49.416 5696.591 - 5721.797: 0.1480% ( 17) 00:07:49.416 5721.797 - 5747.003: 0.4770% ( 40) 00:07:49.416 5747.003 - 5772.209: 0.9622% ( 59) 00:07:49.416 5772.209 - 5797.415: 1.5789% ( 75) 00:07:49.416 5797.415 - 5822.622: 2.3026% ( 88) 00:07:49.416 5822.622 - 5847.828: 3.0674% ( 93) 00:07:49.416 5847.828 - 5873.034: 3.9967% ( 113) 00:07:49.416 5873.034 - 5898.240: 4.9671% ( 118) 00:07:49.416 5898.240 - 5923.446: 5.9704% ( 122) 00:07:49.416 5923.446 - 5948.652: 6.9901% ( 124) 00:07:49.416 5948.652 - 5973.858: 7.9770% ( 120) 00:07:49.416 5973.858 - 5999.065: 9.0625% ( 132) 00:07:49.416 5999.065 - 6024.271: 10.1809% ( 136) 00:07:49.416 6024.271 - 6049.477: 11.3322% ( 140) 00:07:49.416 6049.477 - 6074.683: 12.5247% ( 145) 00:07:49.416 6074.683 - 6099.889: 13.7089% ( 144) 00:07:49.416 6099.889 - 6125.095: 14.9095% ( 146) 00:07:49.416 6125.095 - 6150.302: 16.0773% ( 142) 00:07:49.416 6150.302 - 6175.508: 17.2122% ( 138) 00:07:49.416 6175.508 - 6200.714: 18.3306% ( 136) 00:07:49.416 6200.714 - 6225.920: 19.4572% ( 137) 00:07:49.416 6225.920 - 6251.126: 20.6579% ( 146) 00:07:49.416 6251.126 - 6276.332: 21.7845% ( 137) 00:07:49.416 6276.332 - 6301.538: 22.9934% ( 147) 00:07:49.416 6301.538 - 6326.745: 24.1941% ( 146) 00:07:49.416 6326.745 - 6351.951: 25.3454% ( 140) 00:07:49.416 6351.951 - 6377.157: 26.5132% ( 142) 00:07:49.416 6377.157 - 6402.363: 27.7138% ( 146) 00:07:49.416 6402.363 - 6427.569: 28.8651% ( 140) 00:07:49.416 6427.569 - 6452.775: 30.0987% ( 150) 00:07:49.416 6452.775 - 6503.188: 32.5082% ( 293) 00:07:49.416 6503.188 - 6553.600: 34.8520% ( 285) 00:07:49.416 6553.600 - 6604.012: 36.7681% ( 233) 00:07:49.416 6604.012 - 6654.425: 38.2155% ( 176) 00:07:49.416 6654.425 - 6704.837: 39.2599% ( 127) 00:07:49.416 6704.837 - 6755.249: 39.8684% ( 74) 00:07:49.416 6755.249 - 6805.662: 40.4030% ( 65) 00:07:49.416 6805.662 - 6856.074: 40.9046% ( 61) 00:07:49.416 6856.074 - 6906.486: 41.3816% ( 58) 00:07:49.416 6906.486 - 6956.898: 41.8339% ( 55) 00:07:49.416 6956.898 - 7007.311: 42.2533% ( 51) 00:07:49.416 7007.311 - 7057.723: 42.7220% ( 57) 00:07:49.416 7057.723 - 7108.135: 43.1332% ( 50) 00:07:49.416 7108.135 - 7158.548: 43.5115% ( 46) 
00:07:49.416 7158.548 - 7208.960: 43.8569% ( 42) 00:07:49.416 7208.960 - 7259.372: 44.2023% ( 42) 00:07:49.416 7259.372 - 7309.785: 44.5066% ( 37) 00:07:49.416 7309.785 - 7360.197: 44.8109% ( 37) 00:07:49.416 7360.197 - 7410.609: 45.0905% ( 34) 00:07:49.416 7410.609 - 7461.022: 45.3618% ( 33) 00:07:49.416 7461.022 - 7511.434: 45.6168% ( 31) 00:07:49.416 7511.434 - 7561.846: 45.8388% ( 27) 00:07:49.416 7561.846 - 7612.258: 46.1513% ( 38) 00:07:49.416 7612.258 - 7662.671: 46.3734% ( 27) 00:07:49.416 7662.671 - 7713.083: 46.5461% ( 21) 00:07:49.416 7713.083 - 7763.495: 46.7434% ( 24) 00:07:49.416 7763.495 - 7813.908: 46.9326% ( 23) 00:07:49.416 7813.908 - 7864.320: 47.0970% ( 20) 00:07:49.416 7864.320 - 7914.732: 47.2286% ( 16) 00:07:49.416 7914.732 - 7965.145: 47.3438% ( 14) 00:07:49.416 7965.145 - 8015.557: 47.4671% ( 15) 00:07:49.416 8015.557 - 8065.969: 47.6069% ( 17) 00:07:49.416 8065.969 - 8116.382: 47.7385% ( 16) 00:07:49.416 8116.382 - 8166.794: 47.8947% ( 19) 00:07:49.416 8166.794 - 8217.206: 48.0428% ( 18) 00:07:49.417 8217.206 - 8267.618: 48.2319% ( 23) 00:07:49.417 8267.618 - 8318.031: 48.4704% ( 29) 00:07:49.417 8318.031 - 8368.443: 48.7664% ( 36) 00:07:49.417 8368.443 - 8418.855: 49.0625% ( 36) 00:07:49.417 8418.855 - 8469.268: 49.3586% ( 36) 00:07:49.417 8469.268 - 8519.680: 49.6299% ( 33) 00:07:49.417 8519.680 - 8570.092: 49.8766% ( 30) 00:07:49.417 8570.092 - 8620.505: 50.1398% ( 32) 00:07:49.417 8620.505 - 8670.917: 50.4934% ( 43) 00:07:49.417 8670.917 - 8721.329: 50.7812% ( 35) 00:07:49.417 8721.329 - 8771.742: 51.0280% ( 30) 00:07:49.417 8771.742 - 8822.154: 51.2582% ( 28) 00:07:49.417 8822.154 - 8872.566: 51.5049% ( 30) 00:07:49.417 8872.566 - 8922.978: 51.7434% ( 29) 00:07:49.417 8922.978 - 8973.391: 51.9984% ( 31) 00:07:49.417 8973.391 - 9023.803: 52.2533% ( 31) 00:07:49.417 9023.803 - 9074.215: 52.5000% ( 30) 00:07:49.417 9074.215 - 9124.628: 52.7467% ( 30) 00:07:49.417 9124.628 - 9175.040: 52.9441% ( 24) 00:07:49.417 9175.040 - 9225.452: 53.1086% ( 20) 00:07:49.417 9225.452 - 9275.865: 53.2072% ( 12) 00:07:49.417 9275.865 - 9326.277: 53.3059% ( 12) 00:07:49.417 9326.277 - 9376.689: 53.4622% ( 19) 00:07:49.417 9376.689 - 9427.102: 53.6266% ( 20) 00:07:49.417 9427.102 - 9477.514: 53.7829% ( 19) 00:07:49.417 9477.514 - 9527.926: 53.9720% ( 23) 00:07:49.417 9527.926 - 9578.338: 54.1447% ( 21) 00:07:49.417 9578.338 - 9628.751: 54.3750% ( 28) 00:07:49.417 9628.751 - 9679.163: 54.5559% ( 22) 00:07:49.417 9679.163 - 9729.575: 54.7451% ( 23) 00:07:49.417 9729.575 - 9779.988: 54.9342% ( 23) 00:07:49.417 9779.988 - 9830.400: 55.1727% ( 29) 00:07:49.417 9830.400 - 9880.812: 55.3701% ( 24) 00:07:49.417 9880.812 - 9931.225: 55.6086% ( 29) 00:07:49.417 9931.225 - 9981.637: 55.8059% ( 24) 00:07:49.417 9981.637 - 10032.049: 55.9951% ( 23) 00:07:49.417 10032.049 - 10082.462: 56.2336% ( 29) 00:07:49.417 10082.462 - 10132.874: 56.4803% ( 30) 00:07:49.417 10132.874 - 10183.286: 56.6776% ( 24) 00:07:49.417 10183.286 - 10233.698: 56.8750% ( 24) 00:07:49.417 10233.698 - 10284.111: 57.0312% ( 19) 00:07:49.417 10284.111 - 10334.523: 57.1546% ( 15) 00:07:49.417 10334.523 - 10384.935: 57.2451% ( 11) 00:07:49.417 10384.935 - 10435.348: 57.3438% ( 12) 00:07:49.417 10435.348 - 10485.760: 57.4507% ( 13) 00:07:49.417 10485.760 - 10536.172: 57.5411% ( 11) 00:07:49.417 10536.172 - 10586.585: 57.6069% ( 8) 00:07:49.417 10586.585 - 10636.997: 57.6727% ( 8) 00:07:49.417 10636.997 - 10687.409: 57.7549% ( 10) 00:07:49.417 10687.409 - 10737.822: 57.8454% ( 11) 00:07:49.417 10737.822 - 10788.234: 57.9194% ( 
9) 00:07:49.417 10788.234 - 10838.646: 57.9852% ( 8) 00:07:49.417 10838.646 - 10889.058: 58.0510% ( 8) 00:07:49.417 10889.058 - 10939.471: 58.1003% ( 6) 00:07:49.417 10939.471 - 10989.883: 58.1414% ( 5) 00:07:49.417 10989.883 - 11040.295: 58.1826% ( 5) 00:07:49.417 11040.295 - 11090.708: 58.2401% ( 7) 00:07:49.417 11090.708 - 11141.120: 58.2812% ( 5) 00:07:49.417 11141.120 - 11191.532: 58.3717% ( 11) 00:07:49.417 11191.532 - 11241.945: 58.4539% ( 10) 00:07:49.417 11241.945 - 11292.357: 58.5362% ( 10) 00:07:49.417 11292.357 - 11342.769: 58.6266% ( 11) 00:07:49.417 11342.769 - 11393.182: 58.7007% ( 9) 00:07:49.417 11393.182 - 11443.594: 58.7829% ( 10) 00:07:49.417 11443.594 - 11494.006: 58.8816% ( 12) 00:07:49.417 11494.006 - 11544.418: 58.9391% ( 7) 00:07:49.417 11544.418 - 11594.831: 59.0132% ( 9) 00:07:49.417 11594.831 - 11645.243: 59.1447% ( 16) 00:07:49.417 11645.243 - 11695.655: 59.2599% ( 14) 00:07:49.417 11695.655 - 11746.068: 59.4243% ( 20) 00:07:49.417 11746.068 - 11796.480: 59.6464% ( 27) 00:07:49.417 11796.480 - 11846.892: 59.8438% ( 24) 00:07:49.417 11846.892 - 11897.305: 60.0493% ( 25) 00:07:49.417 11897.305 - 11947.717: 60.2385% ( 23) 00:07:49.417 11947.717 - 11998.129: 60.4359% ( 24) 00:07:49.417 11998.129 - 12048.542: 60.6414% ( 25) 00:07:49.417 12048.542 - 12098.954: 60.7977% ( 19) 00:07:49.417 12098.954 - 12149.366: 60.9211% ( 15) 00:07:49.417 12149.366 - 12199.778: 61.0609% ( 17) 00:07:49.417 12199.778 - 12250.191: 61.2007% ( 17) 00:07:49.417 12250.191 - 12300.603: 61.3322% ( 16) 00:07:49.417 12300.603 - 12351.015: 61.4638% ( 16) 00:07:49.417 12351.015 - 12401.428: 61.5954% ( 16) 00:07:49.417 12401.428 - 12451.840: 61.7105% ( 14) 00:07:49.417 12451.840 - 12502.252: 61.8092% ( 12) 00:07:49.417 12502.252 - 12552.665: 61.9490% ( 17) 00:07:49.417 12552.665 - 12603.077: 62.0312% ( 10) 00:07:49.417 12603.077 - 12653.489: 62.0806% ( 6) 00:07:49.417 12653.489 - 12703.902: 62.1464% ( 8) 00:07:49.417 12703.902 - 12754.314: 62.1793% ( 4) 00:07:49.417 12754.314 - 12804.726: 62.2204% ( 5) 00:07:49.417 12804.726 - 12855.138: 62.2533% ( 4) 00:07:49.417 12855.138 - 12905.551: 62.3191% ( 8) 00:07:49.417 12905.551 - 13006.375: 62.4342% ( 14) 00:07:49.417 13006.375 - 13107.200: 62.5905% ( 19) 00:07:49.417 13107.200 - 13208.025: 62.7303% ( 17) 00:07:49.417 13208.025 - 13308.849: 62.9276% ( 24) 00:07:49.417 13308.849 - 13409.674: 63.2566% ( 40) 00:07:49.417 13409.674 - 13510.498: 63.5773% ( 39) 00:07:49.417 13510.498 - 13611.323: 64.0378% ( 56) 00:07:49.417 13611.323 - 13712.148: 64.5559% ( 63) 00:07:49.417 13712.148 - 13812.972: 65.1727% ( 75) 00:07:49.417 13812.972 - 13913.797: 65.9786% ( 98) 00:07:49.417 13913.797 - 14014.622: 66.8914% ( 111) 00:07:49.417 14014.622 - 14115.446: 67.8701% ( 119) 00:07:49.417 14115.446 - 14216.271: 68.8734% ( 122) 00:07:49.417 14216.271 - 14317.095: 69.8355% ( 117) 00:07:49.417 14317.095 - 14417.920: 70.8059% ( 118) 00:07:49.417 14417.920 - 14518.745: 71.6365% ( 101) 00:07:49.417 14518.745 - 14619.569: 72.3520% ( 87) 00:07:49.417 14619.569 - 14720.394: 73.1743% ( 100) 00:07:49.417 14720.394 - 14821.218: 73.9391% ( 93) 00:07:49.417 14821.218 - 14922.043: 74.5806% ( 78) 00:07:49.417 14922.043 - 15022.868: 75.1891% ( 74) 00:07:49.417 15022.868 - 15123.692: 75.9046% ( 87) 00:07:49.417 15123.692 - 15224.517: 76.8339% ( 113) 00:07:49.417 15224.517 - 15325.342: 77.7138% ( 107) 00:07:49.417 15325.342 - 15426.166: 78.6020% ( 108) 00:07:49.417 15426.166 - 15526.991: 79.4572% ( 104) 00:07:49.417 15526.991 - 15627.815: 80.4194% ( 117) 00:07:49.417 15627.815 - 15728.640: 
81.3816% ( 117) 00:07:49.417 15728.640 - 15829.465: 82.2286% ( 103) 00:07:49.417 15829.465 - 15930.289: 83.1826% ( 116) 00:07:49.417 15930.289 - 16031.114: 84.2928% ( 135) 00:07:49.417 16031.114 - 16131.938: 85.4359% ( 139) 00:07:49.417 16131.938 - 16232.763: 86.4474% ( 123) 00:07:49.417 16232.763 - 16333.588: 87.4918% ( 127) 00:07:49.417 16333.588 - 16434.412: 88.6760% ( 144) 00:07:49.417 16434.412 - 16535.237: 89.7944% ( 136) 00:07:49.417 16535.237 - 16636.062: 90.6990% ( 110) 00:07:49.417 16636.062 - 16736.886: 91.3487% ( 79) 00:07:49.417 16736.886 - 16837.711: 92.0230% ( 82) 00:07:49.417 16837.711 - 16938.535: 92.6069% ( 71) 00:07:49.417 16938.535 - 17039.360: 93.1497% ( 66) 00:07:49.417 17039.360 - 17140.185: 93.6431% ( 60) 00:07:49.417 17140.185 - 17241.009: 94.1941% ( 67) 00:07:49.417 17241.009 - 17341.834: 94.7286% ( 65) 00:07:49.417 17341.834 - 17442.658: 95.1974% ( 57) 00:07:49.417 17442.658 - 17543.483: 95.7155% ( 63) 00:07:49.417 17543.483 - 17644.308: 96.0691% ( 43) 00:07:49.417 17644.308 - 17745.132: 96.3898% ( 39) 00:07:49.417 17745.132 - 17845.957: 96.7023% ( 38) 00:07:49.417 17845.957 - 17946.782: 96.9572% ( 31) 00:07:49.417 17946.782 - 18047.606: 97.2286% ( 33) 00:07:49.417 18047.606 - 18148.431: 97.5905% ( 44) 00:07:49.417 18148.431 - 18249.255: 97.9194% ( 40) 00:07:49.417 18249.255 - 18350.080: 98.1826% ( 32) 00:07:49.417 18350.080 - 18450.905: 98.3553% ( 21) 00:07:49.417 18450.905 - 18551.729: 98.5280% ( 21) 00:07:49.417 18551.729 - 18652.554: 98.6184% ( 11) 00:07:49.417 18652.554 - 18753.378: 98.6842% ( 8) 00:07:49.417 18753.378 - 18854.203: 98.7418% ( 7) 00:07:49.417 18854.203 - 18955.028: 98.7829% ( 5) 00:07:49.417 18955.028 - 19055.852: 98.8158% ( 4) 00:07:49.417 19055.852 - 19156.677: 98.8569% ( 5) 00:07:49.417 19156.677 - 19257.502: 98.9062% ( 6) 00:07:49.417 19257.502 - 19358.326: 98.9391% ( 4) 00:07:49.417 19358.326 - 19459.151: 98.9474% ( 1) 00:07:49.417 25004.505 - 25105.329: 98.9720% ( 3) 00:07:49.417 25105.329 - 25206.154: 99.0049% ( 4) 00:07:49.417 25206.154 - 25306.978: 99.0296% ( 3) 00:07:49.417 25306.978 - 25407.803: 99.0625% ( 4) 00:07:49.417 25407.803 - 25508.628: 99.0954% ( 4) 00:07:49.417 25508.628 - 25609.452: 99.1283% ( 4) 00:07:49.417 25609.452 - 25710.277: 99.1530% ( 3) 00:07:49.417 25710.277 - 25811.102: 99.1859% ( 4) 00:07:49.417 25811.102 - 26012.751: 99.2516% ( 8) 00:07:49.417 26012.751 - 26214.400: 99.3092% ( 7) 00:07:49.417 26214.400 - 26416.049: 99.3750% ( 8) 00:07:49.417 26416.049 - 26617.698: 99.4408% ( 8) 00:07:49.418 26617.698 - 26819.348: 99.4737% ( 4) 00:07:49.418 31860.578 - 32062.228: 99.4984% ( 3) 00:07:49.418 32062.228 - 32263.877: 99.5559% ( 7) 00:07:49.418 32263.877 - 32465.526: 99.6135% ( 7) 00:07:49.418 32465.526 - 32667.175: 99.6793% ( 8) 00:07:49.418 32667.175 - 32868.825: 99.7451% ( 8) 00:07:49.418 32868.825 - 33070.474: 99.8026% ( 7) 00:07:49.418 33070.474 - 33272.123: 99.8684% ( 8) 00:07:49.418 33272.123 - 33473.772: 99.9260% ( 7) 00:07:49.418 33473.772 - 33675.422: 99.9918% ( 8) 00:07:49.418 33675.422 - 33877.071: 100.0000% ( 1) 00:07:49.418 00:07:49.418 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:49.418 ============================================================================== 00:07:49.418 Range in us Cumulative IO count 00:07:49.418 5671.385 - 5696.591: 0.0082% ( 1) 00:07:49.418 5696.591 - 5721.797: 0.0900% ( 10) 00:07:49.418 5721.797 - 5747.003: 0.3681% ( 34) 00:07:49.418 5747.003 - 5772.209: 0.9162% ( 67) 00:07:49.418 5772.209 - 5797.415: 1.3825% ( 57) 00:07:49.418 5797.415 - 5822.622: 
2.1679% ( 96) 00:07:49.418 5822.622 - 5847.828: 3.1005% ( 114) 00:07:49.418 5847.828 - 5873.034: 3.9921% ( 109) 00:07:49.418 5873.034 - 5898.240: 4.9247% ( 114) 00:07:49.418 5898.240 - 5923.446: 5.9637% ( 127) 00:07:49.418 5923.446 - 5948.652: 6.9126% ( 116) 00:07:49.418 5948.652 - 5973.858: 7.9761% ( 130) 00:07:49.418 5973.858 - 5999.065: 9.0396% ( 130) 00:07:49.418 5999.065 - 6024.271: 10.1685% ( 138) 00:07:49.418 6024.271 - 6049.477: 11.3302% ( 142) 00:07:49.418 6049.477 - 6074.683: 12.4755% ( 140) 00:07:49.418 6074.683 - 6099.889: 13.6126% ( 139) 00:07:49.418 6099.889 - 6125.095: 14.7579% ( 140) 00:07:49.418 6125.095 - 6150.302: 15.9522% ( 146) 00:07:49.418 6150.302 - 6175.508: 17.1875% ( 151) 00:07:49.418 6175.508 - 6200.714: 18.3655% ( 144) 00:07:49.418 6200.714 - 6225.920: 19.5272% ( 142) 00:07:49.418 6225.920 - 6251.126: 20.6806% ( 141) 00:07:49.418 6251.126 - 6276.332: 21.8259% ( 140) 00:07:49.418 6276.332 - 6301.538: 22.9630% ( 139) 00:07:49.418 6301.538 - 6326.745: 24.1001% ( 139) 00:07:49.418 6326.745 - 6351.951: 25.2945% ( 146) 00:07:49.418 6351.951 - 6377.157: 26.5052% ( 148) 00:07:49.418 6377.157 - 6402.363: 27.7160% ( 148) 00:07:49.418 6402.363 - 6427.569: 28.9267% ( 148) 00:07:49.418 6427.569 - 6452.775: 30.1293% ( 147) 00:07:49.418 6452.775 - 6503.188: 32.4853% ( 288) 00:07:49.418 6503.188 - 6553.600: 34.8331% ( 287) 00:07:49.418 6553.600 - 6604.012: 36.7310% ( 232) 00:07:49.418 6604.012 - 6654.425: 38.1954% ( 179) 00:07:49.418 6654.425 - 6704.837: 39.1770% ( 120) 00:07:49.418 6704.837 - 6755.249: 39.8233% ( 79) 00:07:49.418 6755.249 - 6805.662: 40.4287% ( 74) 00:07:49.418 6805.662 - 6856.074: 40.9277% ( 61) 00:07:49.418 6856.074 - 6906.486: 41.3449% ( 51) 00:07:49.418 6906.486 - 6956.898: 41.6967% ( 43) 00:07:49.418 6956.898 - 7007.311: 42.0730% ( 46) 00:07:49.418 7007.311 - 7057.723: 42.3838% ( 38) 00:07:49.418 7057.723 - 7108.135: 42.7274% ( 42) 00:07:49.418 7108.135 - 7158.548: 43.0465% ( 39) 00:07:49.418 7158.548 - 7208.960: 43.4064% ( 44) 00:07:49.418 7208.960 - 7259.372: 43.7255% ( 39) 00:07:49.418 7259.372 - 7309.785: 44.0281% ( 37) 00:07:49.418 7309.785 - 7360.197: 44.2899% ( 32) 00:07:49.418 7360.197 - 7410.609: 44.5681% ( 34) 00:07:49.418 7410.609 - 7461.022: 44.8217% ( 31) 00:07:49.418 7461.022 - 7511.434: 45.0753% ( 31) 00:07:49.418 7511.434 - 7561.846: 45.2880% ( 26) 00:07:49.418 7561.846 - 7612.258: 45.5007% ( 26) 00:07:49.418 7612.258 - 7662.671: 45.7052% ( 25) 00:07:49.418 7662.671 - 7713.083: 45.9997% ( 36) 00:07:49.418 7713.083 - 7763.495: 46.2124% ( 26) 00:07:49.418 7763.495 - 7813.908: 46.4087% ( 24) 00:07:49.418 7813.908 - 7864.320: 46.5560% ( 18) 00:07:49.418 7864.320 - 7914.732: 46.7032% ( 18) 00:07:49.418 7914.732 - 7965.145: 46.9241% ( 27) 00:07:49.418 7965.145 - 8015.557: 47.1122% ( 23) 00:07:49.418 8015.557 - 8065.969: 47.3086% ( 24) 00:07:49.418 8065.969 - 8116.382: 47.4885% ( 22) 00:07:49.418 8116.382 - 8166.794: 47.6767% ( 23) 00:07:49.418 8166.794 - 8217.206: 47.8485% ( 21) 00:07:49.418 8217.206 - 8267.618: 48.0694% ( 27) 00:07:49.418 8267.618 - 8318.031: 48.3230% ( 31) 00:07:49.418 8318.031 - 8368.443: 48.5275% ( 25) 00:07:49.418 8368.443 - 8418.855: 48.7320% ( 25) 00:07:49.418 8418.855 - 8469.268: 48.9447% ( 26) 00:07:49.418 8469.268 - 8519.680: 49.1410% ( 24) 00:07:49.418 8519.680 - 8570.092: 49.2883% ( 18) 00:07:49.418 8570.092 - 8620.505: 49.4846% ( 24) 00:07:49.418 8620.505 - 8670.917: 49.7382% ( 31) 00:07:49.418 8670.917 - 8721.329: 50.0736% ( 41) 00:07:49.418 8721.329 - 8771.742: 50.3354% ( 32) 00:07:49.418 8771.742 - 8822.154: 
50.6217% ( 35) 00:07:49.418 8822.154 - 8872.566: 50.8671% ( 30) 00:07:49.418 8872.566 - 8922.978: 51.1207% ( 31) 00:07:49.418 8922.978 - 8973.391: 51.3989% ( 34) 00:07:49.418 8973.391 - 9023.803: 51.6688% ( 33) 00:07:49.418 9023.803 - 9074.215: 51.9797% ( 38) 00:07:49.418 9074.215 - 9124.628: 52.3069% ( 40) 00:07:49.418 9124.628 - 9175.040: 52.5851% ( 34) 00:07:49.418 9175.040 - 9225.452: 52.8550% ( 33) 00:07:49.418 9225.452 - 9275.865: 53.1086% ( 31) 00:07:49.418 9275.865 - 9326.277: 53.3459% ( 29) 00:07:49.418 9326.277 - 9376.689: 53.6158% ( 33) 00:07:49.418 9376.689 - 9427.102: 53.8531% ( 29) 00:07:49.418 9427.102 - 9477.514: 54.1149% ( 32) 00:07:49.418 9477.514 - 9527.926: 54.3112% ( 24) 00:07:49.418 9527.926 - 9578.338: 54.4993% ( 23) 00:07:49.418 9578.338 - 9628.751: 54.6793% ( 22) 00:07:49.418 9628.751 - 9679.163: 54.9329% ( 31) 00:07:49.418 9679.163 - 9729.575: 55.1947% ( 32) 00:07:49.418 9729.575 - 9779.988: 55.4156% ( 27) 00:07:49.418 9779.988 - 9830.400: 55.5792% ( 20) 00:07:49.418 9830.400 - 9880.812: 55.7673% ( 23) 00:07:49.418 9880.812 - 9931.225: 55.9391% ( 21) 00:07:49.418 9931.225 - 9981.637: 56.0782% ( 17) 00:07:49.418 9981.637 - 10032.049: 56.2418% ( 20) 00:07:49.418 10032.049 - 10082.462: 56.4136% ( 21) 00:07:49.418 10082.462 - 10132.874: 56.5609% ( 18) 00:07:49.418 10132.874 - 10183.286: 56.6918% ( 16) 00:07:49.418 10183.286 - 10233.698: 56.8635% ( 21) 00:07:49.418 10233.698 - 10284.111: 57.0108% ( 18) 00:07:49.418 10284.111 - 10334.523: 57.1253% ( 14) 00:07:49.418 10334.523 - 10384.935: 57.2644% ( 17) 00:07:49.418 10384.935 - 10435.348: 57.4035% ( 17) 00:07:49.418 10435.348 - 10485.760: 57.5262% ( 15) 00:07:49.418 10485.760 - 10536.172: 57.5998% ( 9) 00:07:49.418 10536.172 - 10586.585: 57.6571% ( 7) 00:07:49.418 10586.585 - 10636.997: 57.7062% ( 6) 00:07:49.418 10636.997 - 10687.409: 57.7634% ( 7) 00:07:49.418 10687.409 - 10737.822: 57.8289% ( 8) 00:07:49.418 10737.822 - 10788.234: 57.9352% ( 13) 00:07:49.418 10788.234 - 10838.646: 58.0088% ( 9) 00:07:49.418 10838.646 - 10889.058: 58.0825% ( 9) 00:07:49.418 10889.058 - 10939.471: 58.1479% ( 8) 00:07:49.418 10939.471 - 10989.883: 58.1970% ( 6) 00:07:49.418 10989.883 - 11040.295: 58.2379% ( 5) 00:07:49.418 11040.295 - 11090.708: 58.2870% ( 6) 00:07:49.418 11090.708 - 11141.120: 58.3279% ( 5) 00:07:49.418 11141.120 - 11191.532: 58.3770% ( 6) 00:07:49.418 11191.532 - 11241.945: 58.4260% ( 6) 00:07:49.418 11241.945 - 11292.357: 58.4915% ( 8) 00:07:49.418 11292.357 - 11342.769: 58.5406% ( 6) 00:07:49.418 11342.769 - 11393.182: 58.5978% ( 7) 00:07:49.418 11393.182 - 11443.594: 58.6469% ( 6) 00:07:49.418 11443.594 - 11494.006: 58.7042% ( 7) 00:07:49.418 11494.006 - 11544.418: 58.7615% ( 7) 00:07:49.418 11544.418 - 11594.831: 58.8433% ( 10) 00:07:49.418 11594.831 - 11645.243: 58.9496% ( 13) 00:07:49.418 11645.243 - 11695.655: 59.0478% ( 12) 00:07:49.418 11695.655 - 11746.068: 59.1459% ( 12) 00:07:49.418 11746.068 - 11796.480: 59.2687% ( 15) 00:07:49.419 11796.480 - 11846.892: 59.4159% ( 18) 00:07:49.419 11846.892 - 11897.305: 59.5386% ( 15) 00:07:49.419 11897.305 - 11947.717: 59.6777% ( 17) 00:07:49.419 11947.717 - 11998.129: 59.8004% ( 15) 00:07:49.419 11998.129 - 12048.542: 59.9395% ( 17) 00:07:49.419 12048.542 - 12098.954: 60.1113% ( 21) 00:07:49.419 12098.954 - 12149.366: 60.3076% ( 24) 00:07:49.419 12149.366 - 12199.778: 60.4385% ( 16) 00:07:49.419 12199.778 - 12250.191: 60.5776% ( 17) 00:07:49.419 12250.191 - 12300.603: 60.7412% ( 20) 00:07:49.419 12300.603 - 12351.015: 60.8884% ( 18) 00:07:49.419 12351.015 - 12401.428: 
61.0602% ( 21) 00:07:49.419 12401.428 - 12451.840: 61.2238% ( 20) 00:07:49.419 12451.840 - 12502.252: 61.3384% ( 14) 00:07:49.419 12502.252 - 12552.665: 61.4365% ( 12) 00:07:49.419 12552.665 - 12603.077: 61.5265% ( 11) 00:07:49.419 12603.077 - 12653.489: 61.6329% ( 13) 00:07:49.419 12653.489 - 12703.902: 61.7310% ( 12) 00:07:49.419 12703.902 - 12754.314: 61.8374% ( 13) 00:07:49.419 12754.314 - 12804.726: 61.9192% ( 10) 00:07:49.419 12804.726 - 12855.138: 62.0255% ( 13) 00:07:49.419 12855.138 - 12905.551: 62.1237% ( 12) 00:07:49.419 12905.551 - 13006.375: 62.3282% ( 25) 00:07:49.419 13006.375 - 13107.200: 62.5573% ( 28) 00:07:49.419 13107.200 - 13208.025: 62.9499% ( 48) 00:07:49.419 13208.025 - 13308.849: 63.3426% ( 48) 00:07:49.419 13308.849 - 13409.674: 63.7189% ( 46) 00:07:49.419 13409.674 - 13510.498: 64.1279% ( 50) 00:07:49.419 13510.498 - 13611.323: 64.7251% ( 73) 00:07:49.419 13611.323 - 13712.148: 65.4941% ( 94) 00:07:49.419 13712.148 - 13812.972: 66.1486% ( 80) 00:07:49.419 13812.972 - 13913.797: 66.8439% ( 85) 00:07:49.419 13913.797 - 14014.622: 67.5638% ( 88) 00:07:49.419 14014.622 - 14115.446: 68.3164% ( 92) 00:07:49.419 14115.446 - 14216.271: 69.0609% ( 91) 00:07:49.419 14216.271 - 14317.095: 69.7808% ( 88) 00:07:49.419 14317.095 - 14417.920: 70.4270% ( 79) 00:07:49.419 14417.920 - 14518.745: 71.0978% ( 82) 00:07:49.419 14518.745 - 14619.569: 71.7605% ( 81) 00:07:49.419 14619.569 - 14720.394: 72.5376% ( 95) 00:07:49.419 14720.394 - 14821.218: 73.4130% ( 107) 00:07:49.419 14821.218 - 14922.043: 74.2965% ( 108) 00:07:49.419 14922.043 - 15022.868: 75.1145% ( 100) 00:07:49.419 15022.868 - 15123.692: 76.0635% ( 116) 00:07:49.419 15123.692 - 15224.517: 76.9797% ( 112) 00:07:49.419 15224.517 - 15325.342: 77.8387% ( 105) 00:07:49.419 15325.342 - 15426.166: 78.7222% ( 108) 00:07:49.419 15426.166 - 15526.991: 79.5402% ( 100) 00:07:49.419 15526.991 - 15627.815: 80.3665% ( 101) 00:07:49.419 15627.815 - 15728.640: 81.1518% ( 96) 00:07:49.419 15728.640 - 15829.465: 82.0272% ( 107) 00:07:49.419 15829.465 - 15930.289: 83.0007% ( 119) 00:07:49.419 15930.289 - 16031.114: 84.2032% ( 147) 00:07:49.419 16031.114 - 16131.938: 85.2667% ( 130) 00:07:49.419 16131.938 - 16232.763: 86.2402% ( 119) 00:07:49.419 16232.763 - 16333.588: 87.2628% ( 125) 00:07:49.419 16333.588 - 16434.412: 88.1790% ( 112) 00:07:49.419 16434.412 - 16535.237: 89.1688% ( 121) 00:07:49.419 16535.237 - 16636.062: 90.2405% ( 131) 00:07:49.419 16636.062 - 16736.886: 91.2549% ( 124) 00:07:49.419 16736.886 - 16837.711: 92.2693% ( 124) 00:07:49.419 16837.711 - 16938.535: 93.0056% ( 90) 00:07:49.419 16938.535 - 17039.360: 93.6518% ( 79) 00:07:49.419 17039.360 - 17140.185: 94.2408% ( 72) 00:07:49.419 17140.185 - 17241.009: 94.7726% ( 65) 00:07:49.419 17241.009 - 17341.834: 95.3370% ( 69) 00:07:49.419 17341.834 - 17442.658: 95.8197% ( 59) 00:07:49.419 17442.658 - 17543.483: 96.2615% ( 54) 00:07:49.419 17543.483 - 17644.308: 96.6787% ( 51) 00:07:49.419 17644.308 - 17745.132: 97.0550% ( 46) 00:07:49.419 17745.132 - 17845.957: 97.3577% ( 37) 00:07:49.419 17845.957 - 17946.782: 97.6440% ( 35) 00:07:49.419 17946.782 - 18047.606: 97.8812% ( 29) 00:07:49.419 18047.606 - 18148.431: 98.1348% ( 31) 00:07:49.419 18148.431 - 18249.255: 98.3802% ( 30) 00:07:49.419 18249.255 - 18350.080: 98.6011% ( 27) 00:07:49.419 18350.080 - 18450.905: 98.7402% ( 17) 00:07:49.419 18450.905 - 18551.729: 98.8384% ( 12) 00:07:49.419 18551.729 - 18652.554: 98.9611% ( 15) 00:07:49.419 18652.554 - 18753.378: 99.0838% ( 15) 00:07:49.419 18753.378 - 18854.203: 99.2147% ( 16) 
00:07:49.419 18854.203 - 18955.028: 99.2883% ( 9) 00:07:49.419 18955.028 - 19055.852: 99.3292% ( 5) 00:07:49.419 19055.852 - 19156.677: 99.3701% ( 5) 00:07:49.419 19156.677 - 19257.502: 99.4028% ( 4) 00:07:49.419 19257.502 - 19358.326: 99.4437% ( 5) 00:07:49.419 19358.326 - 19459.151: 99.4764% ( 4) 00:07:49.419 24500.382 - 24601.206: 99.4846% ( 1) 00:07:49.419 24601.206 - 24702.031: 99.5173% ( 4) 00:07:49.419 24702.031 - 24802.855: 99.5419% ( 3) 00:07:49.419 24802.855 - 24903.680: 99.5746% ( 4) 00:07:49.419 24903.680 - 25004.505: 99.6073% ( 4) 00:07:49.419 25004.505 - 25105.329: 99.6401% ( 4) 00:07:49.419 25105.329 - 25206.154: 99.6728% ( 4) 00:07:49.419 25206.154 - 25306.978: 99.6973% ( 3) 00:07:49.419 25306.978 - 25407.803: 99.7300% ( 4) 00:07:49.419 25407.803 - 25508.628: 99.7628% ( 4) 00:07:49.419 25508.628 - 25609.452: 99.7955% ( 4) 00:07:49.419 25609.452 - 25710.277: 99.8282% ( 4) 00:07:49.419 25710.277 - 25811.102: 99.8527% ( 3) 00:07:49.419 25811.102 - 26012.751: 99.9182% ( 8) 00:07:49.419 26012.751 - 26214.400: 99.9836% ( 8) 00:07:49.419 26214.400 - 26416.049: 100.0000% ( 2) 00:07:49.419 00:07:49.419 09:36:16 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:07:50.800 Initializing NVMe Controllers 00:07:50.800 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:50.800 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:50.800 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:50.800 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:50.800 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:50.800 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:50.800 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:50.800 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:50.800 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:50.800 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:50.800 Initialization complete. Launching workers. 
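Note on the perf invocation above: -q 128 sets the queue depth, -w write the I/O pattern, -o 12288 the I/O size in bytes (12 KiB), -t 1 the run time in seconds, and -i 0 the shared-memory group ID. In SPDK's perf tool a single -L enables latency tracking with a percentile summary, and the doubled -LL additionally emits the per-namespace "Range in us / Cumulative IO count" histograms seen throughout this log; judging by the monotonically increasing percentages, each histogram row gives a latency bucket, the cumulative share of I/Os completed at or below that bucket, and, in parentheses, the count of I/Os that landed in the bucket itself.

A minimal post-processing sketch in shell, assuming the console output has been saved to a file; the file name perf.log, the controller address, and the -A 17 context length are illustrative assumptions, not part of the harness:

    # Hypothetical helper: print one controller's summary percentile block
    # from a saved console log. perf.log and -A 17 are assumptions.
    ctrl="0000:00:13.0"
    grep -F -A 17 "Summary latency data for PCIE ($ctrl) NSID 1 from core 0" perf.log

Substituting another PCIe address (or NSID) selects the corresponding block; the 17 trailing lines cover the separator rule plus the fifteen percentile rows that each summary section below contains.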
00:07:50.800 ======================================================== 00:07:50.800 Latency(us) 00:07:50.800 Device Information : IOPS MiB/s Average min max 00:07:50.800 PCIE (0000:00:13.0) NSID 1 from core 0: 12325.18 144.44 10402.43 5543.64 35386.25 00:07:50.800 PCIE (0000:00:10.0) NSID 1 from core 0: 12325.18 144.44 10386.57 5597.34 34013.00 00:07:50.800 PCIE (0000:00:11.0) NSID 1 from core 0: 12325.18 144.44 10369.65 5571.41 32641.11 00:07:50.800 PCIE (0000:00:12.0) NSID 1 from core 0: 12325.18 144.44 10353.58 5584.30 32121.29 00:07:50.800 PCIE (0000:00:12.0) NSID 2 from core 0: 12325.18 144.44 10337.76 5469.28 31224.92 00:07:50.800 PCIE (0000:00:12.0) NSID 3 from core 0: 12389.04 145.18 10268.65 5601.55 23231.67 00:07:50.800 ======================================================== 00:07:50.800 Total : 74014.94 867.36 10353.03 5469.28 35386.25 00:07:50.800 00:07:50.800 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:50.800 ================================================================================= 00:07:50.800 1.00000% : 5923.446us 00:07:50.800 10.00000% : 7158.548us 00:07:50.800 25.00000% : 9275.865us 00:07:50.800 50.00000% : 10183.286us 00:07:50.800 75.00000% : 11141.120us 00:07:50.800 90.00000% : 13409.674us 00:07:50.800 95.00000% : 14619.569us 00:07:50.800 98.00000% : 15325.342us 00:07:50.800 99.00000% : 27625.945us 00:07:50.801 99.50000% : 33272.123us 00:07:50.801 99.90000% : 35086.966us 00:07:50.801 99.99000% : 35490.265us 00:07:50.801 99.99900% : 35490.265us 00:07:50.801 99.99990% : 35490.265us 00:07:50.801 99.99999% : 35490.265us 00:07:50.801 00:07:50.801 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:50.801 ================================================================================= 00:07:50.801 1.00000% : 5873.034us 00:07:50.801 10.00000% : 7158.548us 00:07:50.801 25.00000% : 9275.865us 00:07:50.801 50.00000% : 10183.286us 00:07:50.801 75.00000% : 11241.945us 00:07:50.801 90.00000% : 13510.498us 00:07:50.801 95.00000% : 14619.569us 00:07:50.801 98.00000% : 15224.517us 00:07:50.801 99.00000% : 25407.803us 00:07:50.801 99.50000% : 32465.526us 00:07:50.801 99.90000% : 33877.071us 00:07:50.801 99.99000% : 34078.720us 00:07:50.801 99.99900% : 34078.720us 00:07:50.801 99.99990% : 34078.720us 00:07:50.801 99.99999% : 34078.720us 00:07:50.801 00:07:50.801 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:50.801 ================================================================================= 00:07:50.801 1.00000% : 5948.652us 00:07:50.801 10.00000% : 7108.135us 00:07:50.801 25.00000% : 9376.689us 00:07:50.801 50.00000% : 10183.286us 00:07:50.801 75.00000% : 11141.120us 00:07:50.801 90.00000% : 13611.323us 00:07:50.801 95.00000% : 14619.569us 00:07:50.801 98.00000% : 15325.342us 00:07:50.801 99.00000% : 23693.785us 00:07:50.801 99.50000% : 31053.982us 00:07:50.801 99.90000% : 32465.526us 00:07:50.801 99.99000% : 32667.175us 00:07:50.801 99.99900% : 32667.175us 00:07:50.801 99.99990% : 32667.175us 00:07:50.801 99.99999% : 32667.175us 00:07:50.801 00:07:50.801 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:50.801 ================================================================================= 00:07:50.801 1.00000% : 5948.652us 00:07:50.801 10.00000% : 7108.135us 00:07:50.801 25.00000% : 9275.865us 00:07:50.801 50.00000% : 10233.698us 00:07:50.801 75.00000% : 10989.883us 00:07:50.801 90.00000% : 13611.323us 00:07:50.801 95.00000% : 14619.569us 00:07:50.801 98.00000% : 15426.166us 
00:07:50.801 99.00000% : 23088.837us 00:07:50.801 99.50000% : 30650.683us 00:07:50.801 99.90000% : 31860.578us 00:07:50.801 99.99000% : 32263.877us 00:07:50.801 99.99900% : 32263.877us 00:07:50.801 99.99990% : 32263.877us 00:07:50.801 99.99999% : 32263.877us 00:07:50.801 00:07:50.801 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:50.801 ================================================================================= 00:07:50.801 1.00000% : 5923.446us 00:07:50.801 10.00000% : 7158.548us 00:07:50.801 25.00000% : 9275.865us 00:07:50.801 50.00000% : 10183.286us 00:07:50.801 75.00000% : 11040.295us 00:07:50.801 90.00000% : 13510.498us 00:07:50.801 95.00000% : 14720.394us 00:07:50.801 98.00000% : 15325.342us 00:07:50.801 99.00000% : 21878.942us 00:07:50.801 99.50000% : 29642.437us 00:07:50.801 99.90000% : 31053.982us 00:07:50.801 99.99000% : 31255.631us 00:07:50.801 99.99900% : 31255.631us 00:07:50.801 99.99990% : 31255.631us 00:07:50.801 99.99999% : 31255.631us 00:07:50.801 00:07:50.801 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:50.801 ================================================================================= 00:07:50.801 1.00000% : 5923.446us 00:07:50.801 10.00000% : 7208.960us 00:07:50.801 25.00000% : 9326.277us 00:07:50.801 50.00000% : 10132.874us 00:07:50.801 75.00000% : 10989.883us 00:07:50.801 90.00000% : 13510.498us 00:07:50.801 95.00000% : 14821.218us 00:07:50.801 98.00000% : 15325.342us 00:07:50.801 99.00000% : 15728.640us 00:07:50.801 99.50000% : 21778.117us 00:07:50.801 99.90000% : 22988.012us 00:07:50.801 99.99000% : 23290.486us 00:07:50.801 99.99900% : 23290.486us 00:07:50.801 99.99990% : 23290.486us 00:07:50.801 99.99999% : 23290.486us 00:07:50.801 00:07:50.801 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:50.801 ============================================================================== 00:07:50.801 Range in us Cumulative IO count 00:07:50.801 5520.148 - 5545.354: 0.0081% ( 1) 00:07:50.801 5721.797 - 5747.003: 0.0243% ( 2) 00:07:50.801 5747.003 - 5772.209: 0.0567% ( 4) 00:07:50.801 5772.209 - 5797.415: 0.1295% ( 9) 00:07:50.801 5797.415 - 5822.622: 0.2105% ( 10) 00:07:50.801 5822.622 - 5847.828: 0.2995% ( 11) 00:07:50.801 5847.828 - 5873.034: 0.4534% ( 19) 00:07:50.801 5873.034 - 5898.240: 0.8339% ( 47) 00:07:50.801 5898.240 - 5923.446: 1.0606% ( 28) 00:07:50.801 5923.446 - 5948.652: 1.5382% ( 59) 00:07:50.801 5948.652 - 5973.858: 1.7811% ( 30) 00:07:50.801 5973.858 - 5999.065: 2.0725% ( 36) 00:07:50.801 5999.065 - 6024.271: 2.6312% ( 69) 00:07:50.801 6024.271 - 6049.477: 3.4165% ( 97) 00:07:50.801 6049.477 - 6074.683: 3.7241% ( 38) 00:07:50.801 6074.683 - 6099.889: 3.8617% ( 17) 00:07:50.801 6099.889 - 6125.095: 4.0236% ( 20) 00:07:50.801 6125.095 - 6150.302: 4.6470% ( 77) 00:07:50.801 6150.302 - 6175.508: 4.8008% ( 19) 00:07:50.801 6175.508 - 6200.714: 4.9223% ( 15) 00:07:50.801 6200.714 - 6225.920: 4.9951% ( 9) 00:07:50.801 6225.920 - 6251.126: 5.0599% ( 8) 00:07:50.801 6251.126 - 6276.332: 5.1085% ( 6) 00:07:50.801 6276.332 - 6301.538: 5.1733% ( 8) 00:07:50.801 6301.538 - 6326.745: 5.2218% ( 6) 00:07:50.801 6326.745 - 6351.951: 5.2866% ( 8) 00:07:50.801 6351.951 - 6377.157: 5.3756% ( 11) 00:07:50.801 6377.157 - 6402.363: 5.5133% ( 17) 00:07:50.801 6402.363 - 6427.569: 5.6833% ( 21) 00:07:50.801 6427.569 - 6452.775: 6.0638% ( 47) 00:07:50.801 6452.775 - 6503.188: 6.8248% ( 94) 00:07:50.801 6503.188 - 6553.600: 7.4077% ( 72) 00:07:50.801 6553.600 - 6604.012: 7.8125% ( 50) 00:07:50.801 6604.012 
- 6654.425: 8.0392% ( 28) 00:07:50.801 6654.425 - 6704.837: 8.1606% ( 15) 00:07:50.801 6704.837 - 6755.249: 8.2578% ( 12) 00:07:50.801 6755.249 - 6805.662: 8.3468% ( 11) 00:07:50.801 6805.662 - 6856.074: 8.5330% ( 23) 00:07:50.801 6856.074 - 6906.486: 8.6869% ( 19) 00:07:50.801 6906.486 - 6956.898: 8.8245% ( 17) 00:07:50.801 6956.898 - 7007.311: 9.0188% ( 24) 00:07:50.801 7007.311 - 7057.723: 9.1888% ( 21) 00:07:50.801 7057.723 - 7108.135: 9.7312% ( 67) 00:07:50.801 7108.135 - 7158.548: 10.0874% ( 44) 00:07:50.801 7158.548 - 7208.960: 10.6299% ( 67) 00:07:50.801 7208.960 - 7259.372: 10.9132% ( 35) 00:07:50.801 7259.372 - 7309.785: 11.0266% ( 14) 00:07:50.801 7309.785 - 7360.197: 11.1156% ( 11) 00:07:50.801 7360.197 - 7410.609: 11.2209% ( 13) 00:07:50.801 7410.609 - 7461.022: 11.3504% ( 16) 00:07:50.801 7461.022 - 7511.434: 11.5528% ( 25) 00:07:50.801 7511.434 - 7561.846: 11.6985% ( 18) 00:07:50.801 7561.846 - 7612.258: 11.7795% ( 10) 00:07:50.801 7612.258 - 7662.671: 12.2652% ( 60) 00:07:50.801 7662.671 - 7713.083: 12.4595% ( 24) 00:07:50.801 7713.083 - 7763.495: 12.6781% ( 27) 00:07:50.801 7763.495 - 7813.908: 12.9210% ( 30) 00:07:50.801 7813.908 - 7864.320: 13.1720% ( 31) 00:07:50.801 7864.320 - 7914.732: 13.6658% ( 61) 00:07:50.801 7914.732 - 7965.145: 14.1111% ( 55) 00:07:50.801 7965.145 - 8015.557: 14.4187% ( 38) 00:07:50.801 8015.557 - 8065.969: 14.8397% ( 52) 00:07:50.801 8065.969 - 8116.382: 15.0097% ( 21) 00:07:50.801 8116.382 - 8166.794: 15.2121% ( 25) 00:07:50.801 8166.794 - 8217.206: 15.4469% ( 29) 00:07:50.801 8217.206 - 8267.618: 15.8031% ( 44) 00:07:50.801 8267.618 - 8318.031: 15.9569% ( 19) 00:07:50.801 8318.031 - 8368.443: 16.0865% ( 16) 00:07:50.801 8368.443 - 8418.855: 16.2484% ( 20) 00:07:50.801 8418.855 - 8469.268: 16.5398% ( 36) 00:07:50.801 8469.268 - 8519.680: 17.0823% ( 67) 00:07:50.801 8519.680 - 8570.092: 17.7704% ( 85) 00:07:50.801 8570.092 - 8620.505: 18.3290% ( 69) 00:07:50.801 8620.505 - 8670.917: 18.7338% ( 50) 00:07:50.801 8670.917 - 8721.329: 19.4543% ( 89) 00:07:50.801 8721.329 - 8771.742: 20.3287% ( 108) 00:07:50.801 8771.742 - 8822.154: 20.7983% ( 58) 00:07:50.801 8822.154 - 8872.566: 21.1869% ( 48) 00:07:50.801 8872.566 - 8922.978: 21.7455% ( 69) 00:07:50.801 8922.978 - 8973.391: 22.1422% ( 49) 00:07:50.801 8973.391 - 9023.803: 22.4741% ( 41) 00:07:50.801 9023.803 - 9074.215: 22.9841% ( 63) 00:07:50.801 9074.215 - 9124.628: 23.6318% ( 80) 00:07:50.801 9124.628 - 9175.040: 24.0528% ( 52) 00:07:50.801 9175.040 - 9225.452: 24.5062% ( 56) 00:07:50.801 9225.452 - 9275.865: 25.0405% ( 66) 00:07:50.801 9275.865 - 9326.277: 25.9634% ( 114) 00:07:50.801 9326.277 - 9376.689: 26.9349% ( 120) 00:07:50.801 9376.689 - 9427.102: 27.9793% ( 129) 00:07:50.801 9427.102 - 9477.514: 29.4446% ( 181) 00:07:50.801 9477.514 - 9527.926: 30.7238% ( 158) 00:07:50.801 9527.926 - 9578.338: 32.1082% ( 171) 00:07:50.801 9578.338 - 9628.751: 33.6788% ( 194) 00:07:50.801 9628.751 - 9679.163: 35.1927% ( 187) 00:07:50.801 9679.163 - 9729.575: 36.7957% ( 198) 00:07:50.801 9729.575 - 9779.988: 38.3744% ( 195) 00:07:50.801 9779.988 - 9830.400: 40.0097% ( 202) 00:07:50.801 9830.400 - 9880.812: 41.5317% ( 188) 00:07:50.801 9880.812 - 9931.225: 42.9890% ( 180) 00:07:50.801 9931.225 - 9981.637: 44.5353% ( 191) 00:07:50.801 9981.637 - 10032.049: 46.0411% ( 186) 00:07:50.801 10032.049 - 10082.462: 47.5955% ( 192) 00:07:50.801 10082.462 - 10132.874: 48.9718% ( 170) 00:07:50.801 10132.874 - 10183.286: 50.5343% ( 193) 00:07:50.801 10183.286 - 10233.698: 51.9430% ( 174) 00:07:50.801 10233.698 - 
10284.111: 53.3679% ( 176) 00:07:50.801 10284.111 - 10334.523: 54.7442% ( 170) 00:07:50.801 10334.523 - 10384.935: 56.3229% ( 195) 00:07:50.801 10384.935 - 10435.348: 57.8854% ( 193) 00:07:50.802 10435.348 - 10485.760: 59.2859% ( 173) 00:07:50.802 10485.760 - 10536.172: 61.1075% ( 225) 00:07:50.802 10536.172 - 10586.585: 62.9858% ( 232) 00:07:50.802 10586.585 - 10636.997: 64.8235% ( 227) 00:07:50.802 10636.997 - 10687.409: 66.3779% ( 192) 00:07:50.802 10687.409 - 10737.822: 67.8756% ( 185) 00:07:50.802 10737.822 - 10788.234: 68.8714% ( 123) 00:07:50.802 10788.234 - 10838.646: 69.7134% ( 104) 00:07:50.802 10838.646 - 10889.058: 70.7011% ( 122) 00:07:50.802 10889.058 - 10939.471: 71.7859% ( 134) 00:07:50.802 10939.471 - 10989.883: 72.6441% ( 106) 00:07:50.802 10989.883 - 11040.295: 73.4051% ( 94) 00:07:50.802 11040.295 - 11090.708: 74.3604% ( 118) 00:07:50.802 11090.708 - 11141.120: 75.4129% ( 130) 00:07:50.802 11141.120 - 11191.532: 76.2387% ( 102) 00:07:50.802 11191.532 - 11241.945: 77.1535% ( 113) 00:07:50.802 11241.945 - 11292.357: 78.1574% ( 124) 00:07:50.802 11292.357 - 11342.769: 78.7160% ( 69) 00:07:50.802 11342.769 - 11393.182: 79.1937% ( 59) 00:07:50.802 11393.182 - 11443.594: 79.5499% ( 44) 00:07:50.802 11443.594 - 11494.006: 79.9385% ( 48) 00:07:50.802 11494.006 - 11544.418: 80.2461% ( 38) 00:07:50.802 11544.418 - 11594.831: 80.5699% ( 40) 00:07:50.802 11594.831 - 11645.243: 80.8371% ( 33) 00:07:50.802 11645.243 - 11695.655: 81.2662% ( 53) 00:07:50.802 11695.655 - 11746.068: 81.4848% ( 27) 00:07:50.802 11746.068 - 11796.480: 81.7034% ( 27) 00:07:50.802 11796.480 - 11846.892: 81.9624% ( 32) 00:07:50.802 11846.892 - 11897.305: 82.1891% ( 28) 00:07:50.802 11897.305 - 11947.717: 82.5858% ( 49) 00:07:50.802 11947.717 - 11998.129: 82.8773% ( 36) 00:07:50.802 11998.129 - 12048.542: 83.1525% ( 34) 00:07:50.802 12048.542 - 12098.954: 83.5816% ( 53) 00:07:50.802 12098.954 - 12149.366: 84.0026% ( 52) 00:07:50.802 12149.366 - 12199.778: 84.5126% ( 63) 00:07:50.802 12199.778 - 12250.191: 84.8608% ( 43) 00:07:50.802 12250.191 - 12300.603: 85.1441% ( 35) 00:07:50.802 12300.603 - 12351.015: 85.4356% ( 36) 00:07:50.802 12351.015 - 12401.428: 85.7108% ( 34) 00:07:50.802 12401.428 - 12451.840: 86.0589% ( 43) 00:07:50.802 12451.840 - 12502.252: 86.3099% ( 31) 00:07:50.802 12502.252 - 12552.665: 86.5204% ( 26) 00:07:50.802 12552.665 - 12603.077: 86.7228% ( 25) 00:07:50.802 12603.077 - 12653.489: 86.9252% ( 25) 00:07:50.802 12653.489 - 12703.902: 87.1357% ( 26) 00:07:50.802 12703.902 - 12754.314: 87.3462% ( 26) 00:07:50.802 12754.314 - 12804.726: 87.5972% ( 31) 00:07:50.802 12804.726 - 12855.138: 87.7429% ( 18) 00:07:50.802 12855.138 - 12905.551: 87.8562% ( 14) 00:07:50.802 12905.551 - 13006.375: 88.2043% ( 43) 00:07:50.802 13006.375 - 13107.200: 88.6253% ( 52) 00:07:50.802 13107.200 - 13208.025: 88.9411% ( 39) 00:07:50.802 13208.025 - 13308.849: 89.5240% ( 72) 00:07:50.802 13308.849 - 13409.674: 90.1554% ( 78) 00:07:50.802 13409.674 - 13510.498: 90.4874% ( 41) 00:07:50.802 13510.498 - 13611.323: 90.8679% ( 47) 00:07:50.802 13611.323 - 13712.148: 91.1431% ( 34) 00:07:50.802 13712.148 - 13812.972: 91.4184% ( 34) 00:07:50.802 13812.972 - 13913.797: 91.6694% ( 31) 00:07:50.802 13913.797 - 14014.622: 91.9689% ( 37) 00:07:50.802 14014.622 - 14115.446: 92.3332% ( 45) 00:07:50.802 14115.446 - 14216.271: 92.8271% ( 61) 00:07:50.802 14216.271 - 14317.095: 93.3128% ( 60) 00:07:50.802 14317.095 - 14417.920: 93.9119% ( 74) 00:07:50.802 14417.920 - 14518.745: 94.5110% ( 74) 00:07:50.802 14518.745 - 14619.569: 
95.0777% ( 70)
[... intermediate latency buckets elided ...]
00:07:50.802 35288.615 - 35490.265: 100.0000% ( 4)
00:07:50.802
00:07:50.802 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:50.802 ==============================================================================
00:07:50.802 Range in us Cumulative IO count
00:07:50.802 5595.766 - 5620.972: 0.0081% ( 1)
[... intermediate latency buckets elided ...]
00:07:50.803 33877.071 - 34078.720: 100.0000% ( 6)
00:07:50.803
00:07:50.803 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:07:50.803 ==============================================================================
00:07:50.803 Range in us Cumulative IO count
00:07:50.803 5570.560 - 5595.766: 0.0081% ( 1)
[... intermediate latency buckets elided ...]
00:07:50.804 32465.526 - 32667.175: 100.0000% ( 8)
00:07:50.804
00:07:50.804 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:50.804 ==============================================================================
00:07:50.804 Range in us Cumulative IO count
00:07:50.804 5570.560 - 5595.766: 0.0081% ( 1)
[... intermediate latency buckets elided ...]
00:07:50.806 32062.228 - 32263.877: 100.0000% ( 3)
00:07:50.806
00:07:50.806 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:50.806 ==============================================================================
00:07:50.806 Range in us Cumulative IO count
00:07:50.806 5444.529 - 5469.735: 0.0081% ( 1)
[... intermediate latency buckets elided ...]
00:07:50.807 31053.982 - 31255.631: 100.0000% ( 7)
00:07:50.807
00:07:50.807 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:50.807 ==============================================================================
00:07:50.807 Range in us Cumulative IO count
00:07:50.807 5595.766 - 5620.972: 0.0161% ( 2)
[... intermediate latency buckets elided ...]
00:07:50.808 23189.662 - 23290.486: 100.0000% ( 2)
00:07:50.808
00:07:50.808 09:36:18 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:07:50.808
00:07:50.808 real 0m2.570s
00:07:50.808 user 0m2.236s
00:07:50.808 sys 0m0.221s
00:07:50.808 09:36:18 nvme.nvme_perf -- common/autotest_common.sh@1128 -- # xtrace_disable
00:07:50.808 09:36:18 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:07:50.808 ************************************
00:07:50.808 END TEST nvme_perf
00:07:50.808 ************************************
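[Editor's note] The tables above are cumulative latency histograms: each row is a latency bucket ("Range in us"), the percentage is the running share of IOs that completed at or below that bucket, and the parenthesized value is the raw IO count that landed in the bucket. Below is a minimal C sketch of this style of bookkeeping; the geometric bucket boundaries and every name in it are illustrative assumptions, not SPDK's actual histogram implementation.

    #include <stdint.h>
    #include <stdio.h>

    #define NBUCKETS 64

    static double   bucket_max_us[NBUCKETS]; /* upper bound of each bucket */
    static uint64_t bucket_count[NBUCKETS];
    static uint64_t total_ios;

    static void histogram_init(void)
    {
        /* Geometric bucket widths: roughly constant relative error across
         * the ~5 us .. ~35 ms span seen above, with a small fixed memory cost. */
        double b = 5.0;
        for (int i = 0; i < NBUCKETS; i++) {
            bucket_max_us[i] = b;
            b *= 1.2;
        }
    }

    static void histogram_record(double latency_us)
    {
        int i = 0;
        while (i < NBUCKETS - 1 && latency_us > bucket_max_us[i])
            i++;
        bucket_count[i]++; /* outliers are clamped into the last bucket */
        total_ios++;
    }

    static void histogram_print(void)
    {
        uint64_t cum = 0;
        for (int i = 0; i < NBUCKETS; i++) {
            if (bucket_count[i] == 0)
                continue; /* empty buckets are not printed, as in the log */
            cum += bucket_count[i];
            printf("%12.3f - %12.3f: %8.4f%% ( %llu)\n",
                   i ? bucket_max_us[i - 1] : 0.0, bucket_max_us[i],
                   100.0 * (double)cum / (double)total_ios,
                   (unsigned long long)bucket_count[i]);
        }
    }

    int main(void)
    {
        histogram_init();
        histogram_record(5600.0);   /* three sample completion latencies */
        histogram_record(9800.0);
        histogram_record(15200.0);
        histogram_print();
        return 0;
    }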
00:07:50.808 09:36:18 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:07:50.808 09:36:18 nvme -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:07:50.808 09:36:18 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable
00:07:50.808 09:36:18 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:50.808 ************************************
00:07:50.808 START TEST nvme_hello_world
00:07:50.808 ************************************
00:07:50.808 09:36:18 nvme.nvme_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:07:50.808 Initializing NVMe Controllers
00:07:50.808 Attached to 0000:00:13.0
00:07:50.808 Namespace ID: 1 size: 1GB
00:07:50.808 Attached to 0000:00:10.0
00:07:50.808 Namespace ID: 1 size: 6GB
00:07:50.808 Attached to 0000:00:11.0
00:07:50.808 Namespace ID: 1 size: 5GB
00:07:50.808 Attached to 0000:00:12.0
00:07:50.808 Namespace ID: 1 size: 4GB
00:07:50.808 Namespace ID: 2 size: 4GB
00:07:50.808 Namespace ID: 3 size: 4GB
00:07:50.808 Initialization complete.
00:07:50.808 INFO: using host memory buffer for IO
00:07:50.808 Hello world!
00:07:50.808 INFO: using host memory buffer for IO
00:07:50.808 Hello world!
00:07:50.808 INFO: using host memory buffer for IO
00:07:50.808 Hello world!
00:07:50.808 INFO: using host memory buffer for IO
00:07:50.808 Hello world!
00:07:50.808 INFO: using host memory buffer for IO
00:07:50.808 Hello world!
00:07:50.808 INFO: using host memory buffer for IO
00:07:50.808 Hello world!
00:07:51.069
00:07:51.069 real 0m0.232s
00:07:51.069 user 0m0.080s
00:07:51.069 sys 0m0.106s
00:07:51.069 09:36:18 nvme.nvme_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable
00:07:51.069 09:36:18 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:07:51.069 ************************************
00:07:51.069 END TEST nvme_hello_world
00:07:51.069 ************************************
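[Editor's note] hello_world probes every NVMe controller, prints the attached controllers and their namespaces (one "Hello world!" per namespace above: six namespaces across the four devices), then writes "Hello world!" to LBA 0 of each namespace and reads it back, using a host memory buffer where the device advertises one. A compressed sketch of that write-then-read flow against a single namespace follows, assuming the SPDK NVMe driver API; error handling and the real example's per-namespace loop are omitted.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    static struct spdk_nvme_ctrlr *g_ctrlr;

    static bool probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
                         struct spdk_nvme_ctrlr_opts *opts)
    {
        return true; /* attach to every controller the probe finds */
    }

    static void attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
                          struct spdk_nvme_ctrlr *ctrlr,
                          const struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("Attached to %s\n", trid->traddr);
        g_ctrlr = ctrlr; /* keep the last one for the demo below */
    }

    static void io_done(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        *(bool *)arg = true;
    }

    int main(void)
    {
        struct spdk_env_opts opts;
        spdk_env_opts_init(&opts);
        opts.name = "hello_world_sketch"; /* hypothetical app name */
        if (spdk_env_init(&opts) < 0)
            return 1;
        if (spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0 || !g_ctrlr)
            return 1;

        struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(g_ctrlr, 1);
        struct spdk_nvme_qpair *qp = spdk_nvme_ctrlr_alloc_io_qpair(g_ctrlr, NULL, 0);
        uint32_t sz = spdk_nvme_ns_get_sector_size(ns);
        char *buf = spdk_zmalloc(sz, 0x1000, NULL, SPDK_ENV_SOCKET_ID_ANY,
                                 SPDK_MALLOC_DMA);
        bool done = false;

        snprintf(buf, sz, "%s", "Hello world!");
        spdk_nvme_ns_cmd_write(ns, qp, buf, 0, 1, io_done, &done, 0);
        while (!done) /* polled mode: reap completions by hand */
            spdk_nvme_qpair_process_completions(qp, 0);

        memset(buf, 0, sz);
        done = false;
        spdk_nvme_ns_cmd_read(ns, qp, buf, 0, 1, io_done, &done, 0);
        while (!done)
            spdk_nvme_qpair_process_completions(qp, 0);
        printf("%s\n", buf); /* prints "Hello world!" on success */
        return 0;
    }

The polled-mode loop is the point of the example: the driver takes no interrupts, and completions surface only when spdk_nvme_qpair_process_completions() is called.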
00:07:51.069 09:36:18 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:07:51.069 09:36:18 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:07:51.069 09:36:18 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable
00:07:51.069 09:36:18 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:51.069 ************************************
00:07:51.069 START TEST nvme_sgl
00:07:51.069 ************************************
00:07:51.069 09:36:18 nvme.nvme_sgl -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:07:51.069 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:07:51.069 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:07:51.069 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:07:51.069 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:07:51.069 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:07:51.069 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:07:51.069 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:07:51.069 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:07:51.069 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:07:51.069 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:07:51.069 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:07:51.069 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:07:51.069 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:07:51.069 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:07:51.069 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:07:51.069 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:07:51.069 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:07:51.069 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:07:51.069 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:07:51.069 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:07:51.069 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:07:51.330 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:07:51.330 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:07:51.330 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:07:51.330 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:07:51.330 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:07:51.330 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:07:51.330 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:07:51.330 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:07:51.330 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:07:51.330 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:07:51.330 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:07:51.330 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:07:51.330 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:07:51.330 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:07:51.330 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:07:51.330 NVMe Readv/Writev Request test
00:07:51.330 Attached to 0000:00:13.0
00:07:51.330 Attached to 0000:00:10.0
00:07:51.330 Attached to 0000:00:11.0
00:07:51.330 Attached to 0000:00:12.0
00:07:51.330 0000:00:10.0: build_io_request_2 test passed
00:07:51.330 0000:00:10.0: build_io_request_4 test passed
00:07:51.330 0000:00:10.0: build_io_request_5 test passed
00:07:51.330 0000:00:10.0: build_io_request_6 test passed
00:07:51.330 0000:00:10.0: build_io_request_7 test passed
00:07:51.330 0000:00:10.0: build_io_request_10 test passed
00:07:51.330 0000:00:11.0: build_io_request_2 test passed
00:07:51.330 0000:00:11.0: build_io_request_4 test passed
00:07:51.330 0000:00:11.0: build_io_request_5 test passed
00:07:51.330 0000:00:11.0: build_io_request_6 test passed
00:07:51.330 0000:00:11.0: build_io_request_7 test passed
00:07:51.330 0000:00:11.0: build_io_request_10 test passed
00:07:51.330 Cleaning up...
00:07:51.330
00:07:51.330 real 0m0.282s
00:07:51.330 user 0m0.145s
00:07:51.330 sys 0m0.101s
00:07:51.330 09:36:18 nvme.nvme_sgl -- common/autotest_common.sh@1128 -- # xtrace_disable
00:07:51.330 09:36:18 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:07:51.330 ************************************
00:07:51.330 END TEST nvme_sgl
00:07:51.330 ************************************
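[Editor's note] nvme_sgl exercises the driver's scatter-gather path: each build_io_request_N assembles a multi-segment payload, some of whose total lengths are deliberately invalid for the namespace, and the test expects the "Invalid IO length parameter" rejections above while the valid shapes must complete ("test passed"). All twelve shapes are rejected on 0000:00:13.0 and 0000:00:12.0, while 0000:00:10.0 and 0000:00:11.0 accept six of them; which shapes pass presumably tracks what each emulated controller advertises. Below is a sketch of issuing one vectored request through this kind of API, assuming SPDK's spdk_nvme_ns_cmd_writev() callback interface; the three-segment layout and helper names are illustrative.

    #include <stdint.h>
    #include "spdk/nvme.h"

    /* Illustrative three-segment payload. The real test permutes segment
     * counts, sizes, and alignments across build_io_request_0..11. */
    struct sgl_ctx {
        void     *seg[3];
        uint32_t  len[3];
        int       cur;
    };

    /* Called by the driver before it walks the SGL; rewind to 'offset' bytes. */
    static void reset_sgl(void *ref, uint32_t offset)
    {
        struct sgl_ctx *c = ref;

        c->cur = 0;
        while (c->cur < 3 && offset >= c->len[c->cur]) {
            offset -= c->len[c->cur];
            c->cur++;
        }
        /* partial-segment offsets are ignored in this sketch */
    }

    /* Called by the driver to fetch the next segment of the payload. */
    static int next_sge(void *ref, void **address, uint32_t *length)
    {
        struct sgl_ctx *c = ref;

        *address = c->seg[c->cur];
        *length = c->len[c->cur];
        c->cur++;
        return 0;
    }

    /* If the summed segment lengths do not match lba_count * sector size,
     * submission is rejected up front; the test prints "Invalid IO length
     * parameter" where it expected exactly that. */
    static int submit_vectored_write(struct spdk_nvme_ns *ns,
                                     struct spdk_nvme_qpair *qpair,
                                     struct sgl_ctx *c,
                                     spdk_nvme_cmd_cb cb_fn, void *cb_arg)
    {
        return spdk_nvme_ns_cmd_writev(ns, qpair, 0 /* lba */, 8 /* lba count */,
                                       cb_fn, cb_arg, 0 /* io_flags */,
                                       reset_sgl, next_sge);
    }

The driver pulls segments through these callbacks rather than taking an iovec array, which lets it restart the walk at an arbitrary byte offset when a request has to be split.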
00:07:51.330 09:36:18 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:07:51.330 09:36:18 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:07:51.330 09:36:18 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable
00:07:51.330 09:36:18 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:51.330 ************************************
00:07:51.330 START TEST nvme_e2edp
00:07:51.330 ************************************
00:07:51.330 09:36:18 nvme.nvme_e2edp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:07:51.591 NVMe Write/Read with End-to-End data protection test
00:07:51.591 Attached to 0000:00:13.0
00:07:51.591 Attached to 0000:00:10.0
00:07:51.591 Attached to 0000:00:11.0
00:07:51.591 Attached to 0000:00:12.0
00:07:51.591 Cleaning up...
00:07:51.591
00:07:51.591 real 0m0.222s
00:07:51.591 user 0m0.062s
00:07:51.591 sys 0m0.115s
00:07:51.591 09:36:19 nvme.nvme_e2edp -- common/autotest_common.sh@1128 -- # xtrace_disable
00:07:51.591 09:36:19 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:07:51.591 ************************************
00:07:51.591 END TEST nvme_e2edp
00:07:51.591 ************************************
00:07:51.591 09:36:19 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:07:51.591 09:36:19 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:07:51.591 09:36:19 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable
00:07:51.591 09:36:19 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:51.591 ************************************
00:07:51.591 START TEST nvme_reserve
00:07:51.591 ************************************
00:07:51.591 09:36:19 nvme.nvme_reserve -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:07:51.856 =====================================================
00:07:51.856 NVMe Controller at PCI bus 0, device 19, function 0
00:07:51.856 =====================================================
00:07:51.856 Reservations: Not Supported
00:07:51.856 =====================================================
00:07:51.856 NVMe Controller at PCI bus 0, device 16, function 0
00:07:51.856 =====================================================
00:07:51.856 Reservations: Not Supported
00:07:51.856 =====================================================
00:07:51.856 NVMe Controller at PCI bus 0, device 17, function 0
00:07:51.856 =====================================================
00:07:51.856 Reservations: Not Supported
00:07:51.856 =====================================================
00:07:51.856 NVMe Controller at PCI bus 0, device 18, function 0
00:07:51.856 =====================================================
00:07:51.856 Reservations: Not Supported
00:07:51.856 Reservation test passed
00:07:51.856
00:07:51.856 real 0m0.226s
00:07:51.856 user 0m0.070s
00:07:51.856 sys 0m0.109s
00:07:51.856 09:36:19 nvme.nvme_reserve -- common/autotest_common.sh@1128 -- # xtrace_disable
00:07:51.856 ************************************
00:07:51.856 END TEST nvme_reserve
00:07:51.856 09:36:19 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
00:07:51.856 ************************************
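[Editor's note] The reserve test only proceeds to actual reservation commands when a controller advertises the capability; the QEMU-emulated controllers here report it as unsupported in the Identify Controller ONCS field, so the test appears to pass after the capability check alone. A sketch of that check, assuming SPDK's spdk_nvme_ctrlr_get_data() accessor; the real test iterates over every attached controller.

    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Print whether 'ctrlr' implements reservations, as reported in the
     * Identify Controller data (ONCS bit). Sketch only. */
    static void print_reservation_support(struct spdk_nvme_ctrlr *ctrlr)
    {
        const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);

        if (cdata->oncs.reservations) {
            printf("Reservations: Supported\n");
        } else {
            printf("Reservations: Not Supported\n");
        }
    }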
nvme_reserve 00:07:51.856 09:36:19 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:07:51.856 ************************************ 00:07:51.856 09:36:19 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:51.856 09:36:19 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:51.856 09:36:19 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:51.856 09:36:19 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:51.856 ************************************ 00:07:51.856 START TEST nvme_err_injection 00:07:51.856 ************************************ 00:07:51.856 09:36:19 nvme.nvme_err_injection -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:52.120 NVMe Error Injection test 00:07:52.120 Attached to 0000:00:13.0 00:07:52.120 Attached to 0000:00:10.0 00:07:52.120 Attached to 0000:00:11.0 00:07:52.120 Attached to 0000:00:12.0 00:07:52.120 0000:00:13.0: get features failed as expected 00:07:52.120 0000:00:10.0: get features failed as expected 00:07:52.120 0000:00:11.0: get features failed as expected 00:07:52.120 0000:00:12.0: get features failed as expected 00:07:52.120 0000:00:13.0: get features successfully as expected 00:07:52.120 0000:00:10.0: get features successfully as expected 00:07:52.120 0000:00:11.0: get features successfully as expected 00:07:52.120 0000:00:12.0: get features successfully as expected 00:07:52.120 0000:00:12.0: read failed as expected 00:07:52.120 0000:00:13.0: read failed as expected 00:07:52.120 0000:00:10.0: read failed as expected 00:07:52.120 0000:00:11.0: read failed as expected 00:07:52.121 0000:00:11.0: read successfully as expected 00:07:52.121 0000:00:12.0: read successfully as expected 00:07:52.121 0000:00:13.0: read successfully as expected 00:07:52.121 0000:00:10.0: read successfully as expected 00:07:52.121 Cleaning up... 00:07:52.121 00:07:52.121 real 0m0.246s 00:07:52.121 user 0m0.077s 00:07:52.121 sys 0m0.113s 00:07:52.121 09:36:19 nvme.nvme_err_injection -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:52.121 09:36:19 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:07:52.121 ************************************ 00:07:52.121 END TEST nvme_err_injection 00:07:52.121 ************************************ 00:07:52.121 09:36:19 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:52.121 09:36:19 nvme -- common/autotest_common.sh@1103 -- # '[' 9 -le 1 ']' 00:07:52.121 09:36:19 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:52.121 09:36:19 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:52.121 ************************************ 00:07:52.121 START TEST nvme_overhead 00:07:52.121 ************************************ 00:07:52.121 09:36:19 nvme.nvme_overhead -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:53.497 Initializing NVMe Controllers 00:07:53.497 Attached to 0000:00:13.0 00:07:53.497 Attached to 0000:00:10.0 00:07:53.497 Attached to 0000:00:11.0 00:07:53.497 Attached to 0000:00:12.0 00:07:53.497 Initialization complete. Launching workers. 
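The overhead tool now prints two cumulative histograms, one for the submit path and one for the complete path. Each row that follows names a latency bucket in microseconds, then the cumulative percentage of operations at or below that bucket, with the bucket's raw sample count in parentheses. The same reduction can be sketched from plain (bucket, count) pairs; buckets.txt is a hypothetical two-column input file:

    # Convert "bucket_us count" rows into the cumulative-percentage layout below.
    # First pass sums the total; second pass prints the running percentage.
    awk 'NR == FNR { total += $2; next }
         { cum += $2; printf "%10.3f: %8.4f%% ( %d)\n", $1, 100 * cum / total, $2 }' \
        buckets.txt buckets.txt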
00:07:53.497 submit (in ns) avg, min, max = 11468.3, 10087.7, 325466.2 00:07:53.497 complete (in ns) avg, min, max = 7857.1, 7206.9, 191428.5 00:07:53.497 00:07:53.497 Submit histogram 00:07:53.497 ================ 00:07:53.497 Range in us Cumulative Count 00:07:53.497 10.043 - 10.092: 0.0059% ( 1) 00:07:53.497 10.240 - 10.289: 0.0117% ( 1) 00:07:53.497 10.388 - 10.437: 0.0176% ( 1) 00:07:53.497 10.831 - 10.880: 0.0352% ( 3) 00:07:53.497 10.880 - 10.929: 0.2345% ( 34) 00:07:53.497 10.929 - 10.978: 1.8646% ( 278) 00:07:53.497 10.978 - 11.028: 6.8484% ( 850) 00:07:53.497 11.028 - 11.077: 16.3882% ( 1627) 00:07:53.497 11.077 - 11.126: 27.5989% ( 1912) 00:07:53.497 11.126 - 11.175: 38.3583% ( 1835) 00:07:53.497 11.175 - 11.225: 47.1885% ( 1506) 00:07:53.497 11.225 - 11.274: 54.0721% ( 1174) 00:07:53.497 11.274 - 11.323: 60.7271% ( 1135) 00:07:53.497 11.323 - 11.372: 67.9332% ( 1229) 00:07:53.497 11.372 - 11.422: 74.8461% ( 1179) 00:07:53.497 11.422 - 11.471: 80.1700% ( 908) 00:07:53.497 11.471 - 11.520: 84.1513% ( 679) 00:07:53.497 11.520 - 11.569: 87.0712% ( 498) 00:07:53.497 11.569 - 11.618: 89.0472% ( 337) 00:07:53.497 11.618 - 11.668: 90.6303% ( 270) 00:07:53.497 11.668 - 11.717: 91.6095% ( 167) 00:07:53.497 11.717 - 11.766: 92.4186% ( 138) 00:07:53.497 11.766 - 11.815: 92.9757% ( 95) 00:07:53.497 11.815 - 11.865: 93.4154% ( 75) 00:07:53.497 11.865 - 11.914: 93.8317% ( 71) 00:07:53.497 11.914 - 11.963: 94.2773% ( 76) 00:07:53.497 11.963 - 12.012: 94.6819% ( 69) 00:07:53.497 12.012 - 12.062: 95.0748% ( 67) 00:07:53.497 12.062 - 12.111: 95.4383% ( 62) 00:07:53.497 12.111 - 12.160: 95.7432% ( 52) 00:07:53.497 12.160 - 12.209: 96.0598% ( 54) 00:07:53.497 12.209 - 12.258: 96.3119% ( 43) 00:07:53.497 12.258 - 12.308: 96.5113% ( 34) 00:07:53.497 12.308 - 12.357: 96.6286% ( 20) 00:07:53.497 12.357 - 12.406: 96.7341% ( 18) 00:07:53.497 12.406 - 12.455: 96.8045% ( 12) 00:07:53.497 12.455 - 12.505: 96.8220% ( 3) 00:07:53.497 12.505 - 12.554: 96.8807% ( 10) 00:07:53.497 12.554 - 12.603: 96.9276% ( 8) 00:07:53.497 12.603 - 12.702: 96.9628% ( 6) 00:07:53.497 12.702 - 12.800: 96.9921% ( 5) 00:07:53.497 12.800 - 12.898: 97.0449% ( 9) 00:07:53.497 12.898 - 12.997: 97.0742% ( 5) 00:07:53.497 12.997 - 13.095: 97.1797% ( 18) 00:07:53.497 13.095 - 13.194: 97.3204% ( 24) 00:07:53.497 13.194 - 13.292: 97.4729% ( 26) 00:07:53.497 13.292 - 13.391: 97.6253% ( 26) 00:07:53.497 13.391 - 13.489: 97.7485% ( 21) 00:07:53.497 13.489 - 13.588: 97.8130% ( 11) 00:07:53.497 13.588 - 13.686: 97.8540% ( 7) 00:07:53.497 13.686 - 13.785: 97.9009% ( 8) 00:07:53.497 13.785 - 13.883: 97.9478% ( 8) 00:07:53.497 13.883 - 13.982: 97.9654% ( 3) 00:07:53.497 13.982 - 14.080: 97.9713% ( 1) 00:07:53.497 14.080 - 14.178: 98.0123% ( 7) 00:07:53.497 14.178 - 14.277: 98.0299% ( 3) 00:07:53.497 14.277 - 14.375: 98.0475% ( 3) 00:07:53.497 14.375 - 14.474: 98.0534% ( 1) 00:07:53.497 14.474 - 14.572: 98.1003% ( 8) 00:07:53.497 14.572 - 14.671: 98.1179% ( 3) 00:07:53.497 14.671 - 14.769: 98.1472% ( 5) 00:07:53.497 14.769 - 14.868: 98.1882% ( 7) 00:07:53.497 14.868 - 14.966: 98.2293% ( 7) 00:07:53.497 14.966 - 15.065: 98.2586% ( 5) 00:07:53.497 15.065 - 15.163: 98.2996% ( 7) 00:07:53.497 15.163 - 15.262: 98.3055% ( 1) 00:07:53.497 15.262 - 15.360: 98.3348% ( 5) 00:07:53.497 15.360 - 15.458: 98.3583% ( 4) 00:07:53.497 15.458 - 15.557: 98.3817% ( 4) 00:07:53.497 15.557 - 15.655: 98.3934% ( 2) 00:07:53.497 15.655 - 15.754: 98.4052% ( 2) 00:07:53.497 15.852 - 15.951: 98.4227% ( 3) 00:07:53.497 16.049 - 16.148: 98.4345% ( 2) 00:07:53.497 16.148 - 
16.246: 98.4403% ( 1) 00:07:53.497 16.246 - 16.345: 98.4462% ( 1) 00:07:53.497 16.345 - 16.443: 98.4697% ( 4) 00:07:53.497 16.443 - 16.542: 98.4872% ( 3) 00:07:53.497 16.542 - 16.640: 98.4990% ( 2) 00:07:53.497 16.640 - 16.738: 98.5107% ( 2) 00:07:53.497 16.738 - 16.837: 98.5224% ( 2) 00:07:53.497 16.837 - 16.935: 98.5635% ( 7) 00:07:53.497 16.935 - 17.034: 98.6045% ( 7) 00:07:53.497 17.034 - 17.132: 98.6397% ( 6) 00:07:53.497 17.132 - 17.231: 98.7101% ( 12) 00:07:53.497 17.231 - 17.329: 98.7804% ( 12) 00:07:53.497 17.329 - 17.428: 98.8391% ( 10) 00:07:53.497 17.428 - 17.526: 98.9329% ( 16) 00:07:53.497 17.526 - 17.625: 99.0150% ( 14) 00:07:53.497 17.625 - 17.723: 99.0619% ( 8) 00:07:53.497 17.723 - 17.822: 99.1088% ( 8) 00:07:53.497 17.822 - 17.920: 99.1439% ( 6) 00:07:53.497 17.920 - 18.018: 99.2495% ( 18) 00:07:53.497 18.018 - 18.117: 99.3198% ( 12) 00:07:53.497 18.117 - 18.215: 99.3785% ( 10) 00:07:53.497 18.215 - 18.314: 99.4195% ( 7) 00:07:53.497 18.314 - 18.412: 99.4723% ( 9) 00:07:53.497 18.412 - 18.511: 99.5309% ( 10) 00:07:53.497 18.511 - 18.609: 99.5544% ( 4) 00:07:53.497 18.609 - 18.708: 99.5720% ( 3) 00:07:53.497 18.708 - 18.806: 99.5954% ( 4) 00:07:53.497 18.806 - 18.905: 99.6189% ( 4) 00:07:53.497 18.905 - 19.003: 99.6482% ( 5) 00:07:53.497 19.003 - 19.102: 99.6951% ( 8) 00:07:53.497 19.102 - 19.200: 99.7244% ( 5) 00:07:53.497 19.397 - 19.495: 99.7361% ( 2) 00:07:53.497 19.495 - 19.594: 99.7537% ( 3) 00:07:53.497 19.594 - 19.692: 99.7596% ( 1) 00:07:53.497 19.988 - 20.086: 99.7772% ( 3) 00:07:53.497 20.086 - 20.185: 99.7889% ( 2) 00:07:53.497 20.185 - 20.283: 99.7948% ( 1) 00:07:53.497 20.283 - 20.382: 99.8006% ( 1) 00:07:53.497 20.382 - 20.480: 99.8065% ( 1) 00:07:53.497 20.480 - 20.578: 99.8124% ( 1) 00:07:53.497 20.578 - 20.677: 99.8182% ( 1) 00:07:53.497 20.874 - 20.972: 99.8241% ( 1) 00:07:53.497 20.972 - 21.071: 99.8300% ( 1) 00:07:53.497 21.169 - 21.268: 99.8417% ( 2) 00:07:53.497 21.268 - 21.366: 99.8476% ( 1) 00:07:53.497 21.366 - 21.465: 99.8651% ( 3) 00:07:53.497 21.858 - 21.957: 99.8710% ( 1) 00:07:53.497 22.154 - 22.252: 99.8827% ( 2) 00:07:53.497 22.351 - 22.449: 99.8886% ( 1) 00:07:53.497 22.942 - 23.040: 99.8945% ( 1) 00:07:53.497 23.335 - 23.434: 99.9003% ( 1) 00:07:53.497 23.631 - 23.729: 99.9062% ( 1) 00:07:53.497 23.729 - 23.828: 99.9120% ( 1) 00:07:53.497 23.926 - 24.025: 99.9179% ( 1) 00:07:53.497 24.911 - 25.009: 99.9238% ( 1) 00:07:53.497 25.600 - 25.797: 99.9355% ( 2) 00:07:53.497 26.585 - 26.782: 99.9414% ( 1) 00:07:53.497 28.160 - 28.357: 99.9472% ( 1) 00:07:53.497 30.129 - 30.326: 99.9531% ( 1) 00:07:53.497 30.720 - 30.917: 99.9590% ( 1) 00:07:53.497 40.763 - 40.960: 99.9648% ( 1) 00:07:53.497 49.428 - 49.625: 99.9707% ( 1) 00:07:53.497 49.625 - 49.822: 99.9765% ( 1) 00:07:53.497 51.594 - 51.988: 99.9824% ( 1) 00:07:53.497 59.471 - 59.865: 99.9883% ( 1) 00:07:53.497 63.803 - 64.197: 99.9941% ( 1) 00:07:53.497 324.529 - 326.105: 100.0000% ( 1) 00:07:53.497 00:07:53.497 Complete histogram 00:07:53.498 ================== 00:07:53.498 Range in us Cumulative Count 00:07:53.498 7.188 - 7.237: 0.0117% ( 2) 00:07:53.498 7.237 - 7.286: 0.1583% ( 25) 00:07:53.498 7.286 - 7.335: 1.2665% ( 189) 00:07:53.498 7.335 - 7.385: 5.3709% ( 700) 00:07:53.498 7.385 - 7.434: 14.3594% ( 1533) 00:07:53.498 7.434 - 7.483: 28.0973% ( 2343) 00:07:53.498 7.483 - 7.532: 43.5415% ( 2634) 00:07:53.498 7.532 - 7.582: 53.7965% ( 1749) 00:07:53.498 7.582 - 7.631: 59.5192% ( 976) 00:07:53.498 7.631 - 7.680: 62.1284% ( 445) 00:07:53.498 7.680 - 7.729: 63.4653% ( 228) 00:07:53.498 
7.729 - 7.778: 63.9461% ( 82) 00:07:53.498 7.778 - 7.828: 64.3272% ( 65) 00:07:53.498 7.828 - 7.877: 65.2360% ( 155) 00:07:53.498 7.877 - 7.926: 67.7279% ( 425) 00:07:53.498 7.926 - 7.975: 71.6330% ( 666) 00:07:53.498 7.975 - 8.025: 76.2298% ( 784) 00:07:53.498 8.025 - 8.074: 81.0378% ( 820) 00:07:53.498 8.074 - 8.123: 85.5057% ( 762) 00:07:53.498 8.123 - 8.172: 88.9534% ( 588) 00:07:53.498 8.172 - 8.222: 91.6505% ( 460) 00:07:53.498 8.222 - 8.271: 93.6734% ( 345) 00:07:53.498 8.271 - 8.320: 94.8871% ( 207) 00:07:53.498 8.320 - 8.369: 95.6670% ( 133) 00:07:53.498 8.369 - 8.418: 96.2592% ( 101) 00:07:53.498 8.418 - 8.468: 96.7341% ( 81) 00:07:53.498 8.468 - 8.517: 97.0507% ( 54) 00:07:53.498 8.517 - 8.566: 97.2735% ( 38) 00:07:53.498 8.566 - 8.615: 97.3849% ( 19) 00:07:53.498 8.615 - 8.665: 97.4729% ( 15) 00:07:53.498 8.665 - 8.714: 97.5081% ( 6) 00:07:53.498 8.714 - 8.763: 97.5960% ( 15) 00:07:53.498 8.763 - 8.812: 97.6722% ( 13) 00:07:53.498 8.812 - 8.862: 97.7074% ( 6) 00:07:53.498 8.862 - 8.911: 97.7426% ( 6) 00:07:53.498 8.911 - 8.960: 97.8130% ( 12) 00:07:53.498 8.960 - 9.009: 97.8481% ( 6) 00:07:53.498 9.009 - 9.058: 97.8657% ( 3) 00:07:53.498 9.058 - 9.108: 97.8892% ( 4) 00:07:53.498 9.108 - 9.157: 97.9009% ( 2) 00:07:53.498 9.157 - 9.206: 97.9068% ( 1) 00:07:53.498 9.206 - 9.255: 97.9185% ( 2) 00:07:53.498 9.255 - 9.305: 97.9302% ( 2) 00:07:53.498 9.305 - 9.354: 97.9361% ( 1) 00:07:53.498 9.354 - 9.403: 97.9478% ( 2) 00:07:53.498 9.502 - 9.551: 97.9595% ( 2) 00:07:53.498 9.600 - 9.649: 97.9771% ( 3) 00:07:53.498 9.649 - 9.698: 97.9830% ( 1) 00:07:53.498 9.748 - 9.797: 98.0064% ( 4) 00:07:53.498 9.797 - 9.846: 98.0123% ( 1) 00:07:53.498 9.846 - 9.895: 98.0182% ( 1) 00:07:53.498 9.895 - 9.945: 98.0299% ( 2) 00:07:53.498 9.945 - 9.994: 98.0358% ( 1) 00:07:53.498 9.994 - 10.043: 98.0475% ( 2) 00:07:53.498 10.043 - 10.092: 98.0592% ( 2) 00:07:53.498 10.092 - 10.142: 98.0709% ( 2) 00:07:53.498 10.142 - 10.191: 98.0944% ( 4) 00:07:53.498 10.191 - 10.240: 98.1061% ( 2) 00:07:53.498 10.240 - 10.289: 98.1296% ( 4) 00:07:53.498 10.289 - 10.338: 98.1530% ( 4) 00:07:53.498 10.388 - 10.437: 98.1765% ( 4) 00:07:53.498 10.437 - 10.486: 98.1824% ( 1) 00:07:53.498 10.486 - 10.535: 98.1882% ( 1) 00:07:53.498 10.535 - 10.585: 98.2058% ( 3) 00:07:53.498 10.585 - 10.634: 98.2175% ( 2) 00:07:53.498 10.683 - 10.732: 98.2293% ( 2) 00:07:53.498 10.880 - 10.929: 98.2351% ( 1) 00:07:53.498 11.126 - 11.175: 98.2468% ( 2) 00:07:53.498 11.175 - 11.225: 98.2586% ( 2) 00:07:53.498 11.274 - 11.323: 98.2644% ( 1) 00:07:53.498 11.422 - 11.471: 98.2703% ( 1) 00:07:53.498 11.471 - 11.520: 98.2762% ( 1) 00:07:53.498 11.520 - 11.569: 98.2820% ( 1) 00:07:53.498 11.569 - 11.618: 98.2996% ( 3) 00:07:53.498 11.618 - 11.668: 98.3055% ( 1) 00:07:53.498 11.717 - 11.766: 98.3113% ( 1) 00:07:53.498 11.815 - 11.865: 98.3231% ( 2) 00:07:53.498 11.865 - 11.914: 98.3348% ( 2) 00:07:53.498 11.914 - 11.963: 98.3465% ( 2) 00:07:53.498 12.012 - 12.062: 98.3641% ( 3) 00:07:53.498 12.062 - 12.111: 98.3700% ( 1) 00:07:53.498 12.111 - 12.160: 98.3934% ( 4) 00:07:53.498 12.209 - 12.258: 98.4110% ( 3) 00:07:53.498 12.258 - 12.308: 98.4169% ( 1) 00:07:53.498 12.308 - 12.357: 98.4227% ( 1) 00:07:53.498 12.406 - 12.455: 98.4345% ( 2) 00:07:53.498 12.455 - 12.505: 98.4403% ( 1) 00:07:53.498 12.603 - 12.702: 98.4462% ( 1) 00:07:53.498 12.800 - 12.898: 98.4814% ( 6) 00:07:53.498 12.898 - 12.997: 98.4872% ( 1) 00:07:53.498 12.997 - 13.095: 98.4931% ( 1) 00:07:53.498 13.095 - 13.194: 98.5048% ( 2) 00:07:53.498 13.194 - 13.292: 98.5342% ( 5) 
00:07:53.498 13.292 - 13.391: 98.5459% ( 2) 00:07:53.498 13.391 - 13.489: 98.5693% ( 4) 00:07:53.498 13.489 - 13.588: 98.6162% ( 8) 00:07:53.498 13.588 - 13.686: 98.6690% ( 9) 00:07:53.498 13.686 - 13.785: 98.7101% ( 7) 00:07:53.498 13.785 - 13.883: 98.7511% ( 7) 00:07:53.498 13.883 - 13.982: 98.8097% ( 10) 00:07:53.498 13.982 - 14.080: 98.8742% ( 11) 00:07:53.498 14.080 - 14.178: 98.9622% ( 15) 00:07:53.498 14.178 - 14.277: 99.0208% ( 10) 00:07:53.498 14.277 - 14.375: 99.1088% ( 15) 00:07:53.498 14.375 - 14.474: 99.1674% ( 10) 00:07:53.498 14.474 - 14.572: 99.2202% ( 9) 00:07:53.498 14.572 - 14.671: 99.2905% ( 12) 00:07:53.498 14.671 - 14.769: 99.3668% ( 13) 00:07:53.498 14.769 - 14.868: 99.4313% ( 11) 00:07:53.498 14.868 - 14.966: 99.4782% ( 8) 00:07:53.498 14.966 - 15.065: 99.5075% ( 5) 00:07:53.498 15.065 - 15.163: 99.5427% ( 6) 00:07:53.498 15.163 - 15.262: 99.5720% ( 5) 00:07:53.498 15.262 - 15.360: 99.5954% ( 4) 00:07:53.498 15.360 - 15.458: 99.6306% ( 6) 00:07:53.498 15.458 - 15.557: 99.6541% ( 4) 00:07:53.498 15.557 - 15.655: 99.6599% ( 1) 00:07:53.498 15.754 - 15.852: 99.6717% ( 2) 00:07:53.498 15.852 - 15.951: 99.6892% ( 3) 00:07:53.498 15.951 - 16.049: 99.7244% ( 6) 00:07:53.498 16.049 - 16.148: 99.7303% ( 1) 00:07:53.498 16.148 - 16.246: 99.7655% ( 6) 00:07:53.498 16.246 - 16.345: 99.7772% ( 2) 00:07:53.498 16.345 - 16.443: 99.7831% ( 1) 00:07:53.498 16.542 - 16.640: 99.8065% ( 4) 00:07:53.498 16.640 - 16.738: 99.8182% ( 2) 00:07:53.498 16.738 - 16.837: 99.8300% ( 2) 00:07:53.498 16.935 - 17.034: 99.8417% ( 2) 00:07:53.498 17.132 - 17.231: 99.8476% ( 1) 00:07:53.498 17.231 - 17.329: 99.8534% ( 1) 00:07:53.498 17.428 - 17.526: 99.8593% ( 1) 00:07:53.498 17.526 - 17.625: 99.8651% ( 1) 00:07:53.498 17.625 - 17.723: 99.8769% ( 2) 00:07:53.498 17.723 - 17.822: 99.8827% ( 1) 00:07:53.498 17.920 - 18.018: 99.8886% ( 1) 00:07:53.498 18.215 - 18.314: 99.8945% ( 1) 00:07:53.498 18.314 - 18.412: 99.9003% ( 1) 00:07:53.498 18.412 - 18.511: 99.9062% ( 1) 00:07:53.498 18.905 - 19.003: 99.9120% ( 1) 00:07:53.498 19.102 - 19.200: 99.9179% ( 1) 00:07:53.498 19.397 - 19.495: 99.9238% ( 1) 00:07:53.498 19.692 - 19.791: 99.9296% ( 1) 00:07:53.498 21.465 - 21.563: 99.9355% ( 1) 00:07:53.498 21.563 - 21.662: 99.9414% ( 1) 00:07:53.498 23.040 - 23.138: 99.9472% ( 1) 00:07:53.498 23.237 - 23.335: 99.9531% ( 1) 00:07:53.498 23.335 - 23.434: 99.9590% ( 1) 00:07:53.498 24.714 - 24.812: 99.9648% ( 1) 00:07:53.498 31.705 - 31.902: 99.9707% ( 1) 00:07:53.498 36.234 - 36.431: 99.9765% ( 1) 00:07:53.498 50.412 - 50.806: 99.9824% ( 1) 00:07:53.498 56.714 - 57.108: 99.9883% ( 1) 00:07:53.498 78.769 - 79.163: 99.9941% ( 1) 00:07:53.498 191.409 - 192.197: 100.0000% ( 1) 00:07:53.498 00:07:53.498 00:07:53.498 real 0m1.242s 00:07:53.498 user 0m1.073s 00:07:53.498 sys 0m0.113s 00:07:53.498 09:36:20 nvme.nvme_overhead -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:53.498 09:36:20 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:07:53.498 ************************************ 00:07:53.498 END TEST nvme_overhead 00:07:53.498 ************************************ 00:07:53.498 09:36:20 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:53.498 09:36:20 nvme -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:07:53.498 09:36:20 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:53.498 09:36:20 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:53.498 ************************************ 
00:07:53.498 START TEST nvme_arbitration 00:07:53.498 ************************************ 00:07:53.498 09:36:20 nvme.nvme_arbitration -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:56.917 Initializing NVMe Controllers 00:07:56.917 Attached to 0000:00:13.0 00:07:56.917 Attached to 0000:00:10.0 00:07:56.917 Attached to 0000:00:11.0 00:07:56.917 Attached to 0000:00:12.0 00:07:56.917 Associating QEMU NVMe Ctrl (12343 ) with lcore 0 00:07:56.917 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:07:56.917 Associating QEMU NVMe Ctrl (12341 ) with lcore 2 00:07:56.917 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:07:56.917 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:07:56.917 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:07:56.917 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:07:56.917 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:07:56.917 Initialization complete. Launching workers. 00:07:56.917 Starting thread on core 1 with urgent priority queue 00:07:56.917 Starting thread on core 2 with urgent priority queue 00:07:56.917 Starting thread on core 3 with urgent priority queue 00:07:56.917 Starting thread on core 0 with urgent priority queue 00:07:56.917 QEMU NVMe Ctrl (12343 ) core 0: 874.67 IO/s 114.33 secs/100000 ios 00:07:56.917 QEMU NVMe Ctrl (12342 ) core 0: 874.67 IO/s 114.33 secs/100000 ios 00:07:56.917 QEMU NVMe Ctrl (12340 ) core 1: 832.00 IO/s 120.19 secs/100000 ios 00:07:56.917 QEMU NVMe Ctrl (12342 ) core 1: 832.00 IO/s 120.19 secs/100000 ios 00:07:56.917 QEMU NVMe Ctrl (12341 ) core 2: 938.67 IO/s 106.53 secs/100000 ios 00:07:56.917 QEMU NVMe Ctrl (12342 ) core 3: 938.67 IO/s 106.53 secs/100000 ios 00:07:56.917 ======================================================== 00:07:56.917 00:07:56.917 00:07:56.917 real 0m3.311s 00:07:56.917 user 0m9.201s 00:07:56.917 sys 0m0.123s 00:07:56.917 09:36:24 nvme.nvme_arbitration -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:56.917 09:36:24 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:07:56.917 ************************************ 00:07:56.917 END TEST nvme_arbitration 00:07:56.917 ************************************ 00:07:56.917 09:36:24 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:56.917 09:36:24 nvme -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:07:56.917 09:36:24 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:56.917 09:36:24 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:56.917 ************************************ 00:07:56.917 START TEST nvme_single_aen 00:07:56.917 ************************************ 00:07:56.917 09:36:24 nvme.nvme_single_aen -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:56.917 Asynchronous Event Request test 00:07:56.917 Attached to 0000:00:13.0 00:07:56.917 Attached to 0000:00:10.0 00:07:56.917 Attached to 0000:00:11.0 00:07:56.917 Attached to 0000:00:12.0 00:07:56.917 Reset controller to setup AER completions for this process 00:07:56.917 Registering asynchronous event callbacks... 
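The single-AEN test below provokes an event deterministically: it reads each controller's temperature threshold (343 Kelvin here), sets the threshold below the reported composite temperature (323 Kelvin), waits for the resulting temperature AER, and the aer callback then restores the threshold. Outside SPDK the same trick can be approximated with nvme-cli against a kernel-attached controller; /dev/nvme0 is an assumption, and note these QEMU devices are bound to vfio for this run, so this is illustrative only:

    # Feature 0x04 is the temperature threshold (value in Kelvin).
    nvme get-feature /dev/nvme0 -f 0x04 -H
    # Drop the threshold below the 323 K composite reading to trigger the AER.
    nvme set-feature /dev/nvme0 -f 0x04 -v 0x0140   # 320 K, about 47 C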
00:07:56.917 Getting orig temperature thresholds of all controllers 00:07:56.917 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:56.917 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:56.917 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:56.917 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:56.917 Setting all controllers temperature threshold low to trigger AER 00:07:56.917 Waiting for all controllers temperature threshold to be set lower 00:07:56.917 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:56.917 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:07:56.917 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:56.917 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:07:56.917 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:56.917 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:07:56.917 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:56.917 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:07:56.917 Waiting for all controllers to trigger AER and reset threshold 00:07:56.917 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:56.917 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:56.917 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:56.917 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:56.917 Cleaning up... 00:07:56.917 00:07:56.917 real 0m0.218s 00:07:56.917 user 0m0.076s 00:07:56.917 sys 0m0.100s 00:07:56.917 ************************************ 00:07:56.917 END TEST nvme_single_aen 00:07:56.917 ************************************ 00:07:56.917 09:36:24 nvme.nvme_single_aen -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:56.917 09:36:24 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:07:56.917 09:36:24 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:07:56.917 09:36:24 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:56.917 09:36:24 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:56.917 09:36:24 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:56.917 ************************************ 00:07:56.917 START TEST nvme_doorbell_aers 00:07:56.917 ************************************ 00:07:56.917 09:36:24 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1127 -- # nvme_doorbell_aers 00:07:56.917 09:36:24 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:07:56.917 09:36:24 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:07:56.917 09:36:24 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:07:56.917 09:36:24 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:07:56.917 09:36:24 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # bdfs=() 00:07:56.917 09:36:24 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # local bdfs 00:07:56.917 09:36:24 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:56.917 09:36:24 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:07:56.917 09:36:24 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 
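The bdfs array is populated exactly as the xtrace above shows: gen_nvme.sh emits a JSON bdev configuration and jq extracts each controller's PCI address (traddr). Pulled out as a standalone sketch, using this run's repo path and assuming jq is installed:

    #!/usr/bin/env bash
    # Enumerate NVMe controller BDFs the way nvme_doorbell_aers does.
    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo 'no NVMe controllers found' >&2; exit 1; }
    printf '%s\n' "${bdfs[@]}"   # here: 0000:00:10.0 through 0000:00:13.0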
00:07:57.177 09:36:24 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:07:57.177 09:36:24 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:57.177 09:36:24 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:57.177 09:36:24 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:07:57.177 [2024-11-07 09:36:24.815537] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63302) is not found. Dropping the request. 00:08:07.162 Executing: test_write_invalid_db 00:08:07.162 Waiting for AER completion... 00:08:07.162 Failure: test_write_invalid_db 00:08:07.162 00:08:07.162 Executing: test_invalid_db_write_overflow_sq 00:08:07.162 Waiting for AER completion... 00:08:07.162 Failure: test_invalid_db_write_overflow_sq 00:08:07.162 00:08:07.162 Executing: test_invalid_db_write_overflow_cq 00:08:07.162 Waiting for AER completion... 00:08:07.162 Failure: test_invalid_db_write_overflow_cq 00:08:07.162 00:08:07.162 09:36:34 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:07.162 09:36:34 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:07.420 [2024-11-07 09:36:34.866203] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63302) is not found. Dropping the request. 00:08:17.389 Executing: test_write_invalid_db 00:08:17.389 Waiting for AER completion... 00:08:17.389 Failure: test_write_invalid_db 00:08:17.389 00:08:17.389 Executing: test_invalid_db_write_overflow_sq 00:08:17.389 Waiting for AER completion... 00:08:17.389 Failure: test_invalid_db_write_overflow_sq 00:08:17.389 00:08:17.389 Executing: test_invalid_db_write_overflow_cq 00:08:17.389 Waiting for AER completion... 00:08:17.389 Failure: test_invalid_db_write_overflow_cq 00:08:17.389 00:08:17.389 09:36:44 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:17.389 09:36:44 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:17.389 [2024-11-07 09:36:44.880683] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63302) is not found. Dropping the request. 00:08:27.358 Executing: test_write_invalid_db 00:08:27.358 Waiting for AER completion... 00:08:27.358 Failure: test_write_invalid_db 00:08:27.358 00:08:27.358 Executing: test_invalid_db_write_overflow_sq 00:08:27.358 Waiting for AER completion... 00:08:27.358 Failure: test_invalid_db_write_overflow_sq 00:08:27.358 00:08:27.358 Executing: test_invalid_db_write_overflow_cq 00:08:27.358 Waiting for AER completion... 
00:08:27.358 Failure: test_invalid_db_write_overflow_cq 00:08:27.358 00:08:27.358 09:36:54 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:27.358 09:36:54 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:27.358 [2024-11-07 09:36:54.927204] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63302) is not found. Dropping the request. 00:08:37.326 Executing: test_write_invalid_db 00:08:37.326 Waiting for AER completion... 00:08:37.326 Failure: test_write_invalid_db 00:08:37.326 00:08:37.326 Executing: test_invalid_db_write_overflow_sq 00:08:37.326 Waiting for AER completion... 00:08:37.326 Failure: test_invalid_db_write_overflow_sq 00:08:37.326 00:08:37.326 Executing: test_invalid_db_write_overflow_cq 00:08:37.326 Waiting for AER completion... 00:08:37.326 Failure: test_invalid_db_write_overflow_cq 00:08:37.326 00:08:37.326 ************************************ 00:08:37.326 END TEST nvme_doorbell_aers 00:08:37.326 ************************************ 00:08:37.326 00:08:37.326 real 0m40.187s 00:08:37.326 user 0m34.072s 00:08:37.326 sys 0m5.703s 00:08:37.326 09:37:04 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:37.326 09:37:04 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:08:37.326 09:37:04 nvme -- nvme/nvme.sh@97 -- # uname 00:08:37.326 09:37:04 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:08:37.326 09:37:04 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:37.326 09:37:04 nvme -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:08:37.326 09:37:04 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:37.326 09:37:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:37.326 ************************************ 00:08:37.326 START TEST nvme_multi_aen 00:08:37.326 ************************************ 00:08:37.326 09:37:04 nvme.nvme_multi_aen -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:37.326 [2024-11-07 09:37:04.955213] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63302) is not found. Dropping the request. 00:08:37.326 [2024-11-07 09:37:04.955743] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63302) is not found. Dropping the request. 00:08:37.326 [2024-11-07 09:37:04.955768] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63302) is not found. Dropping the request. 00:08:37.326 [2024-11-07 09:37:04.957032] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63302) is not found. Dropping the request. 00:08:37.326 [2024-11-07 09:37:04.957058] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63302) is not found. Dropping the request. 00:08:37.326 [2024-11-07 09:37:04.957066] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63302) is not found. Dropping the request. 00:08:37.326 [2024-11-07 09:37:04.958201] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63302) is not found. 
Dropping the request. 00:08:37.326 [2024-11-07 09:37:04.958335] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63302) is not found. Dropping the request. 00:08:37.326 [2024-11-07 09:37:04.958424] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63302) is not found. Dropping the request. 00:08:37.326 [2024-11-07 09:37:04.959544] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63302) is not found. Dropping the request. 00:08:37.326 [2024-11-07 09:37:04.959695] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63302) is not found. Dropping the request. 00:08:37.326 [2024-11-07 09:37:04.959793] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63302) is not found. Dropping the request. 00:08:37.326 Child process pid: 63824 00:08:37.585 [Child] Asynchronous Event Request test 00:08:37.585 [Child] Attached to 0000:00:13.0 00:08:37.585 [Child] Attached to 0000:00:10.0 00:08:37.585 [Child] Attached to 0000:00:11.0 00:08:37.585 [Child] Attached to 0000:00:12.0 00:08:37.585 [Child] Registering asynchronous event callbacks... 00:08:37.585 [Child] Getting orig temperature thresholds of all controllers 00:08:37.585 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:37.585 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:37.585 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:37.585 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:37.585 [Child] Waiting for all controllers to trigger AER and reset threshold 00:08:37.585 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:37.585 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:37.585 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:37.585 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:37.585 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:37.585 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:37.585 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:37.585 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:37.585 [Child] Cleaning up... 00:08:37.585 Asynchronous Event Request test 00:08:37.585 Attached to 0000:00:13.0 00:08:37.585 Attached to 0000:00:10.0 00:08:37.585 Attached to 0000:00:11.0 00:08:37.585 Attached to 0000:00:12.0 00:08:37.585 Reset controller to setup AER completions for this process 00:08:37.585 Registering asynchronous event callbacks... 
00:08:37.585 Getting orig temperature thresholds of all controllers 00:08:37.585 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:37.585 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:37.585 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:37.585 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:37.585 Setting all controllers temperature threshold low to trigger AER 00:08:37.585 Waiting for all controllers temperature threshold to be set lower 00:08:37.585 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:37.585 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:37.585 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:37.585 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:37.585 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:37.585 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:37.585 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:37.585 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:37.585 Waiting for all controllers to trigger AER and reset threshold 00:08:37.585 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:37.585 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:37.585 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:37.585 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:37.585 Cleaning up... 00:08:37.585 00:08:37.585 real 0m0.444s 00:08:37.585 user 0m0.144s 00:08:37.585 sys 0m0.192s 00:08:37.585 09:37:05 nvme.nvme_multi_aen -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:37.585 09:37:05 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:08:37.585 ************************************ 00:08:37.585 END TEST nvme_multi_aen 00:08:37.585 ************************************ 00:08:37.844 09:37:05 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:37.844 09:37:05 nvme -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:08:37.844 09:37:05 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:37.844 09:37:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:37.844 ************************************ 00:08:37.844 START TEST nvme_startup 00:08:37.844 ************************************ 00:08:37.844 09:37:05 nvme.nvme_startup -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:37.844 Initializing NVMe Controllers 00:08:37.844 Attached to 0000:00:13.0 00:08:37.844 Attached to 0000:00:10.0 00:08:37.844 Attached to 0000:00:11.0 00:08:37.844 Attached to 0000:00:12.0 00:08:37.844 Initialization complete. 00:08:37.844 Time used:143891.594 (us). 
00:08:37.844 00:08:37.844 real 0m0.204s 00:08:37.844 user 0m0.063s 00:08:37.844 sys 0m0.099s 00:08:37.844 09:37:05 nvme.nvme_startup -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:37.844 09:37:05 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:08:37.844 ************************************ 00:08:37.844 END TEST nvme_startup 00:08:37.844 ************************************ 00:08:38.103 09:37:05 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:08:38.103 09:37:05 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:38.103 09:37:05 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:38.103 09:37:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:38.103 ************************************ 00:08:38.103 START TEST nvme_multi_secondary 00:08:38.103 ************************************ 00:08:38.103 09:37:05 nvme.nvme_multi_secondary -- common/autotest_common.sh@1127 -- # nvme_multi_secondary 00:08:38.103 09:37:05 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=63880 00:08:38.103 09:37:05 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=63881 00:08:38.103 09:37:05 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:08:38.103 09:37:05 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:08:38.103 09:37:05 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:41.382 Initializing NVMe Controllers 00:08:41.382 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:41.382 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:41.382 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:41.382 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:41.382 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:41.382 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:41.382 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:41.382 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:41.382 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:41.382 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:41.382 Initialization complete. Launching workers. 
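The three spdk_nvme_perf command lines above share shared-memory id 0 (-i 0) but pin to disjoint core masks (0x1, 0x2, 0x4), which is how multiple processes attach to the same controllers concurrently; the staggered -t 5 / -t 3 runtimes keep the first instance alive until the others finish, and the per-core latency tables that follow are their output. A sketch of the same launch pattern with this run's binary path — the sleep is an assumption to let the first instance initialize before the others attach:

    #!/usr/bin/env bash
    PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    # One instance per core mask, all on shmem id 0; 16-deep 4 KiB reads.
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &
    sleep 1   # assumption: give the first instance time to come up
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 &
    wait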
00:08:41.382 ======================================================== 00:08:41.382 Latency(us) 00:08:41.382 Device Information : IOPS MiB/s Average min max 00:08:41.382 PCIE (0000:00:13.0) NSID 1 from core 1: 5812.06 22.70 2752.44 1029.98 9896.78 00:08:41.382 PCIE (0000:00:10.0) NSID 1 from core 1: 5812.06 22.70 2752.32 920.98 9608.23 00:08:41.382 PCIE (0000:00:11.0) NSID 1 from core 1: 5812.06 22.70 2753.39 934.07 10170.86 00:08:41.382 PCIE (0000:00:12.0) NSID 1 from core 1: 5812.06 22.70 2753.52 894.44 11564.74 00:08:41.382 PCIE (0000:00:12.0) NSID 2 from core 1: 5812.06 22.70 2753.51 986.85 9650.63 00:08:41.382 PCIE (0000:00:12.0) NSID 3 from core 1: 5812.06 22.70 2753.45 1006.40 9976.42 00:08:41.382 ======================================================== 00:08:41.382 Total : 34872.34 136.22 2753.10 894.44 11564.74 00:08:41.382 00:08:41.382 Initializing NVMe Controllers 00:08:41.382 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:41.382 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:41.382 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:41.382 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:41.382 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:41.382 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:41.382 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:41.382 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:41.382 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:41.382 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:41.382 Initialization complete. Launching workers. 00:08:41.382 ======================================================== 00:08:41.382 Latency(us) 00:08:41.382 Device Information : IOPS MiB/s Average min max 00:08:41.382 PCIE (0000:00:13.0) NSID 1 from core 2: 2010.19 7.85 7959.03 905.84 26234.00 00:08:41.382 PCIE (0000:00:10.0) NSID 1 from core 2: 2010.19 7.85 7958.95 1084.52 25268.13 00:08:41.383 PCIE (0000:00:11.0) NSID 1 from core 2: 2010.19 7.85 7960.78 1033.68 20730.25 00:08:41.383 PCIE (0000:00:12.0) NSID 1 from core 2: 2010.19 7.85 7961.17 1175.87 21118.40 00:08:41.383 PCIE (0000:00:12.0) NSID 2 from core 2: 2010.19 7.85 7963.26 1266.50 22210.16 00:08:41.383 PCIE (0000:00:12.0) NSID 3 from core 2: 2010.19 7.85 7963.78 1000.12 22873.49 00:08:41.383 ======================================================== 00:08:41.383 Total : 12061.14 47.11 7961.16 905.84 26234.00 00:08:41.383 00:08:41.383 09:37:08 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 63880 00:08:43.282 Initializing NVMe Controllers 00:08:43.282 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:43.282 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:43.282 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:43.282 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:43.282 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:43.282 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:43.282 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:43.282 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:43.282 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:43.282 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:43.282 Initialization complete. Launching workers. 
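A quick consistency check on the core 1 table above: the MiB/s column is IOPS times the 4096-byte transfer size, so 5812.06 IO/s works out to the printed 22.70 MiB/s:

    # MiB/s = IOPS x 4096 B / 2^20
    awk 'BEGIN { printf "%.2f MiB/s\n", 5812.06 * 4096 / 1048576 }'   # -> 22.70 MiB/s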
00:08:43.282 ======================================================== 00:08:43.282 Latency(us) 00:08:43.282 Device Information : IOPS MiB/s Average min max 00:08:43.282 PCIE (0000:00:13.0) NSID 1 from core 0: 10429.83 40.74 1533.69 713.98 6857.58 00:08:43.282 PCIE (0000:00:10.0) NSID 1 from core 0: 10429.83 40.74 1532.93 693.89 6569.60 00:08:43.282 PCIE (0000:00:11.0) NSID 1 from core 0: 10429.83 40.74 1533.75 712.94 5861.28 00:08:43.282 PCIE (0000:00:12.0) NSID 1 from core 0: 10429.83 40.74 1533.74 713.11 6273.78 00:08:43.282 PCIE (0000:00:12.0) NSID 2 from core 0: 10429.83 40.74 1533.73 713.78 6428.82 00:08:43.282 PCIE (0000:00:12.0) NSID 3 from core 0: 10429.83 40.74 1533.71 706.36 6723.77 00:08:43.282 ======================================================== 00:08:43.282 Total : 62578.98 244.45 1533.59 693.89 6857.58 00:08:43.282 00:08:43.282 09:37:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 63881 00:08:43.282 09:37:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=63950 00:08:43.282 09:37:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:08:43.282 09:37:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=63951 00:08:43.282 09:37:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:08:43.282 09:37:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:46.560 Initializing NVMe Controllers 00:08:46.560 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:46.560 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:46.560 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:46.560 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:46.560 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:46.560 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:46.560 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:46.560 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:46.560 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:46.560 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:46.560 Initialization complete. Launching workers. 
00:08:46.560 ======================================================== 00:08:46.560 Latency(us) 00:08:46.560 Device Information : IOPS MiB/s Average min max 00:08:46.560 PCIE (0000:00:13.0) NSID 1 from core 1: 6221.07 24.30 2571.46 839.50 7482.81 00:08:46.560 PCIE (0000:00:10.0) NSID 1 from core 1: 6221.07 24.30 2570.49 791.51 8249.16 00:08:46.560 PCIE (0000:00:11.0) NSID 1 from core 1: 6221.07 24.30 2571.61 822.57 8634.81 00:08:46.560 PCIE (0000:00:12.0) NSID 1 from core 1: 6221.07 24.30 2571.55 839.11 8691.98 00:08:46.560 PCIE (0000:00:12.0) NSID 2 from core 1: 6221.07 24.30 2571.84 830.66 8397.36 00:08:46.560 PCIE (0000:00:12.0) NSID 3 from core 1: 6226.40 24.32 2569.68 829.23 7618.63 00:08:46.560 ======================================================== 00:08:46.560 Total : 37331.73 145.83 2571.11 791.51 8691.98 00:08:46.560 00:08:46.560 Initializing NVMe Controllers 00:08:46.560 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:46.560 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:46.560 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:46.560 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:46.560 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:46.560 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:46.560 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:46.560 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:46.560 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:46.560 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:46.560 Initialization complete. Launching workers. 00:08:46.560 ======================================================== 00:08:46.560 Latency(us) 00:08:46.560 Device Information : IOPS MiB/s Average min max 00:08:46.560 PCIE (0000:00:13.0) NSID 1 from core 0: 6291.70 24.58 2542.56 724.34 6772.33 00:08:46.560 PCIE (0000:00:10.0) NSID 1 from core 0: 6291.70 24.58 2541.49 713.61 7410.11 00:08:46.560 PCIE (0000:00:11.0) NSID 1 from core 0: 6291.70 24.58 2542.40 735.84 7155.37 00:08:46.560 PCIE (0000:00:12.0) NSID 1 from core 0: 6291.70 24.58 2542.31 640.26 6665.65 00:08:46.560 PCIE (0000:00:12.0) NSID 2 from core 0: 6291.70 24.58 2542.25 627.00 7509.45 00:08:46.560 PCIE (0000:00:12.0) NSID 3 from core 0: 6291.70 24.58 2542.19 595.46 6657.30 00:08:46.560 ======================================================== 00:08:46.560 Total : 37750.22 147.46 2542.20 595.46 7509.45 00:08:46.560 00:08:48.459 Initializing NVMe Controllers 00:08:48.459 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:48.459 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:48.459 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:48.459 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:48.459 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:48.459 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:48.459 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:48.459 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:48.459 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:48.459 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:48.459 Initialization complete. Launching workers. 
00:08:48.459 ======================================================== 00:08:48.459 Latency(us) 00:08:48.459 Device Information : IOPS MiB/s Average min max 00:08:48.459 PCIE (0000:00:13.0) NSID 1 from core 2: 3273.11 12.79 4887.91 802.09 29189.87 00:08:48.459 PCIE (0000:00:10.0) NSID 1 from core 2: 3273.11 12.79 4886.26 782.32 23551.81 00:08:48.459 PCIE (0000:00:11.0) NSID 1 from core 2: 3273.11 12.79 4887.74 753.67 26485.13 00:08:48.459 PCIE (0000:00:12.0) NSID 1 from core 2: 3273.11 12.79 4887.63 788.77 26172.27 00:08:48.459 PCIE (0000:00:12.0) NSID 2 from core 2: 3273.11 12.79 4887.30 713.24 27175.87 00:08:48.459 PCIE (0000:00:12.0) NSID 3 from core 2: 3273.11 12.79 4886.97 649.58 22853.22 00:08:48.459 ======================================================== 00:08:48.459 Total : 19638.66 76.71 4887.30 649.58 29189.87 00:08:48.459 00:08:48.720 ************************************ 00:08:48.720 END TEST nvme_multi_secondary 00:08:48.720 ************************************ 00:08:48.720 09:37:16 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 63950 00:08:48.720 09:37:16 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 63951 00:08:48.720 00:08:48.720 real 0m10.663s 00:08:48.720 user 0m18.406s 00:08:48.720 sys 0m0.659s 00:08:48.720 09:37:16 nvme.nvme_multi_secondary -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:48.720 09:37:16 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:08:48.720 09:37:16 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:08:48.720 09:37:16 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:08:48.720 09:37:16 nvme -- common/autotest_common.sh@1091 -- # [[ -e /proc/62907 ]] 00:08:48.720 09:37:16 nvme -- common/autotest_common.sh@1092 -- # kill 62907 00:08:48.720 09:37:16 nvme -- common/autotest_common.sh@1093 -- # wait 62907 00:08:48.720 [2024-11-07 09:37:16.219903] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63823) is not found. Dropping the request. 00:08:48.720 [2024-11-07 09:37:16.219967] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63823) is not found. Dropping the request. 00:08:48.720 [2024-11-07 09:37:16.219993] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63823) is not found. Dropping the request. 00:08:48.720 [2024-11-07 09:37:16.220009] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63823) is not found. Dropping the request. 00:08:48.720 [2024-11-07 09:37:16.222074] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63823) is not found. Dropping the request. 00:08:48.720 [2024-11-07 09:37:16.222119] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63823) is not found. Dropping the request. 00:08:48.720 [2024-11-07 09:37:16.222135] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63823) is not found. Dropping the request. 00:08:48.720 [2024-11-07 09:37:16.222151] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63823) is not found. Dropping the request. 00:08:48.720 [2024-11-07 09:37:16.224174] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63823) is not found. Dropping the request. 
00:08:48.720 [2024-11-07 09:37:16.224216] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63823) is not found. Dropping the request. 00:08:48.720 [2024-11-07 09:37:16.224230] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63823) is not found. Dropping the request. 00:08:48.720 [2024-11-07 09:37:16.224246] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63823) is not found. Dropping the request. 00:08:48.720 [2024-11-07 09:37:16.226262] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63823) is not found. Dropping the request. 00:08:48.720 [2024-11-07 09:37:16.226309] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63823) is not found. Dropping the request. 00:08:48.720 [2024-11-07 09:37:16.226325] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63823) is not found. Dropping the request. 00:08:48.720 [2024-11-07 09:37:16.226343] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63823) is not found. Dropping the request. 00:08:48.720 09:37:16 nvme -- common/autotest_common.sh@1095 -- # rm -f /var/run/spdk_stub0 00:08:48.720 09:37:16 nvme -- common/autotest_common.sh@1099 -- # echo 2 00:08:48.720 09:37:16 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:48.720 09:37:16 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:48.720 09:37:16 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:48.720 09:37:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:48.720 ************************************ 00:08:48.720 START TEST bdev_nvme_reset_stuck_adm_cmd 00:08:48.720 ************************************ 00:08:48.720 09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:48.982 * Looking for test storage... 
00:08:48.982 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:48.982 09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:48.982 09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:48.982 09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # lcov --version 00:08:48.982 09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:48.982 09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:48.982 09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:48.982 09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:48.982 09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:08:48.982 09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:08:48.982 09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:08:48.982 09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:08:48.982 09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:08:48.982 09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:08:48.982 09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:08:48.982 09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:48.982 09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:08:48.982 09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:08:48.982 09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:48.982 09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:48.982 09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:08:48.982 09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:08:48.982 09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:48.982 09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:08:48.982 09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:08:48.982 09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:08:48.982 09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:08:48.982 09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:48.982 09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:08:48.982 09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:08:48.982 09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:48.982 09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:48.982 09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:08:48.982 09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:48.982 09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:48.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.982 --rc genhtml_branch_coverage=1 00:08:48.982 --rc genhtml_function_coverage=1 00:08:48.982 --rc genhtml_legend=1 00:08:48.982 --rc geninfo_all_blocks=1 00:08:48.982 --rc geninfo_unexecuted_blocks=1 00:08:48.982 00:08:48.982 ' 00:08:48.982 09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:48.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.982 --rc genhtml_branch_coverage=1 00:08:48.982 --rc genhtml_function_coverage=1 00:08:48.982 --rc genhtml_legend=1 00:08:48.982 --rc geninfo_all_blocks=1 00:08:48.982 --rc geninfo_unexecuted_blocks=1 00:08:48.982 00:08:48.982 ' 00:08:48.982 09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:48.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.982 --rc genhtml_branch_coverage=1 00:08:48.982 --rc genhtml_function_coverage=1 00:08:48.982 --rc genhtml_legend=1 00:08:48.982 --rc geninfo_all_blocks=1 00:08:48.982 --rc geninfo_unexecuted_blocks=1 00:08:48.982 00:08:48.982 ' 00:08:48.982 09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:48.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.982 --rc genhtml_branch_coverage=1 00:08:48.982 --rc genhtml_function_coverage=1 00:08:48.982 --rc genhtml_legend=1 00:08:48.982 --rc geninfo_all_blocks=1 00:08:48.982 --rc geninfo_unexecuted_blocks=1 00:08:48.982 00:08:48.982 ' 00:08:48.982 09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:08:48.982 09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:08:48.982 09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:08:48.982 
09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:08:48.982 09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:08:48.982 09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:08:48.982 09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # bdfs=() 00:08:48.982 09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # local bdfs 00:08:48.982 09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:08:48.983 09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:08:48.983 09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:48.983 09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # local bdfs 00:08:48.983 09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:48.983 09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:48.983 09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:48.983 09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:48.983 09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:48.983 09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:08:48.983 09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:08:48.983 09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:08:48.983 09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=64108 00:08:48.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:48.983 09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:48.983 09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 64108 00:08:48.983 09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@833 -- # '[' -z 64108 ']' 00:08:48.983 09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.983 09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:48.983 09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
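The get_first_nvme_bdf trace above shows how the test picks its target controller: scripts/gen_nvme.sh emits a JSON bdev config for every NVMe device it finds, jq extracts each traddr, and the first address (0000:00:10.0 on this runner) becomes the bdf under test. A minimal standalone sketch of that discovery step, using the repo path shown in this log:

    #!/usr/bin/env bash
    # Collect NVMe PCI addresses (BDFs) from gen_nvme.sh's JSON output and
    # pick the first one, mirroring the get_first_nvme_bdf helper traced above.
    rootdir=/home/vagrant/spdk_repo/spdk   # path taken from this log; adjust to your checkout
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
    echo "${bdfs[0]}"   # prints 0000:00:10.0 on this runner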
00:08:48.983 09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:48.983 09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:48.983 09:37:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:08:49.244 [2024-11-07 09:37:16.653511] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:08:49.244 [2024-11-07 09:37:16.653782] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64108 ] 00:08:49.244 [2024-11-07 09:37:16.827679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:49.503 [2024-11-07 09:37:16.960673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:49.503 [2024-11-07 09:37:16.960827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:49.503 [2024-11-07 09:37:16.961381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.503 [2024-11-07 09:37:16.961407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:50.070 09:37:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:50.070 09:37:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@866 -- # return 0 00:08:50.070 09:37:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:08:50.070 09:37:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.070 09:37:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:50.070 nvme0n1 00:08:50.070 09:37:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.070 09:37:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:08:50.070 09:37:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_q8weO.txt 00:08:50.070 09:37:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:08:50.070 09:37:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.070 09:37:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:50.070 true 00:08:50.070 09:37:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.070 09:37:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:08:50.070 09:37:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1730972237 00:08:50.071 09:37:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=64137 00:08:50.071 09:37:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:50.071 09:37:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:08:50.071 09:37:17 
nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:08:52.601 09:37:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:08:52.601 09:37:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.601 09:37:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:52.601 [2024-11-07 09:37:19.666896] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:52.601 [2024-11-07 09:37:19.667147] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:08:52.601 [2024-11-07 09:37:19.667167] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:08:52.601 [2024-11-07 09:37:19.667178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:52.601 [2024-11-07 09:37:19.669077] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:08:52.601 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 64137 00:08:52.601 09:37:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.601 09:37:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 64137 00:08:52.601 09:37:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 64137 00:08:52.601 09:37:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:08:52.601 09:37:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:08:52.601 09:37:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:08:52.601 09:37:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.601 09:37:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:52.601 09:37:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.601 09:37:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:08:52.601 09:37:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_q8weO.txt 00:08:52.601 09:37:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:08:52.601 09:37:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:08:52.601 09:37:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:52.601 09:37:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:52.601 09:37:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:52.601 09:37:19 
nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:52.601 09:37:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:52.601 09:37:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:52.601 09:37:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:08:52.601 09:37:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:08:52.602 09:37:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:08:52.602 09:37:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:52.602 09:37:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:52.602 09:37:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:52.602 09:37:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:52.602 09:37:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:52.602 09:37:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:52.602 09:37:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:08:52.602 09:37:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:08:52.602 09:37:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_q8weO.txt 00:08:52.602 09:37:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 64108 00:08:52.602 09:37:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # '[' -z 64108 ']' 00:08:52.602 09:37:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # kill -0 64108 00:08:52.602 09:37:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@957 -- # uname 00:08:52.602 09:37:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:52.602 09:37:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 64108 00:08:52.602 killing process with pid 64108 00:08:52.602 09:37:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:52.602 09:37:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:52.602 09:37:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 64108' 00:08:52.602 09:37:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@971 -- # kill 64108 00:08:52.602 09:37:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@976 -- # wait 64108 00:08:53.538 09:37:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:08:53.538 09:37:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:08:53.538 00:08:53.538 real 
0m4.597s 00:08:53.538 user 0m16.120s 00:08:53.538 sys 0m0.553s 00:08:53.538 09:37:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:53.538 ************************************ 00:08:53.538 END TEST bdev_nvme_reset_stuck_adm_cmd 00:08:53.538 ************************************ 00:08:53.538 09:37:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:53.538 09:37:21 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:08:53.538 09:37:21 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:08:53.538 09:37:21 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:53.538 09:37:21 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:53.538 09:37:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:53.538 ************************************ 00:08:53.538 START TEST nvme_fio 00:08:53.538 ************************************ 00:08:53.538 09:37:21 nvme.nvme_fio -- common/autotest_common.sh@1127 -- # nvme_fio_test 00:08:53.538 09:37:21 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:08:53.538 09:37:21 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:08:53.538 09:37:21 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:08:53.538 09:37:21 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:53.538 09:37:21 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # local bdfs 00:08:53.538 09:37:21 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:53.538 09:37:21 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:53.538 09:37:21 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:53.538 09:37:21 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:53.538 09:37:21 nvme.nvme_fio -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:53.538 09:37:21 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:08:53.538 09:37:21 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:08:53.538 09:37:21 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:53.538 09:37:21 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:53.538 09:37:21 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:53.799 09:37:21 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:53.799 09:37:21 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:54.059 09:37:21 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:54.059 09:37:21 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:54.059 09:37:21 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:54.059 09:37:21 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:08:54.059 09:37:21 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:08:54.059 09:37:21 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:08:54.059 09:37:21 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:54.059 09:37:21 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:08:54.059 09:37:21 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:08:54.059 09:37:21 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:08:54.059 09:37:21 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:08:54.059 09:37:21 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:54.059 09:37:21 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:08:54.059 09:37:21 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:54.059 09:37:21 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:54.059 09:37:21 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:08:54.059 09:37:21 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:54.059 09:37:21 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:54.317 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:54.317 fio-3.35 00:08:54.317 Starting 1 thread 00:08:59.605 00:08:59.605 test: (groupid=0, jobs=1): err= 0: pid=64271: Thu Nov 7 09:37:27 2024 00:08:59.605 read: IOPS=20.4k, BW=79.6MiB/s (83.5MB/s)(159MiB/2001msec) 00:08:59.605 slat (usec): min=3, max=105, avg= 5.37, stdev= 2.74 00:08:59.605 clat (usec): min=241, max=10592, avg=3121.47, stdev=1082.30 00:08:59.605 lat (usec): min=247, max=10698, avg=3126.84, stdev=1083.68 00:08:59.605 clat percentiles (usec): 00:08:59.605 | 1.00th=[ 1549], 5.00th=[ 2073], 10.00th=[ 2245], 20.00th=[ 2409], 00:08:59.605 | 30.00th=[ 2474], 40.00th=[ 2606], 50.00th=[ 2737], 60.00th=[ 2933], 00:08:59.605 | 70.00th=[ 3228], 80.00th=[ 3785], 90.00th=[ 4752], 95.00th=[ 5473], 00:08:59.605 | 99.00th=[ 6521], 99.50th=[ 6849], 99.90th=[ 8455], 99.95th=[ 8717], 00:08:59.605 | 99.99th=[10421] 00:08:59.605 bw ( KiB/s): min=77072, max=87128, per=100.00%, avg=83242.67, stdev=5403.50, samples=3 00:08:59.605 iops : min=19268, max=21782, avg=20810.67, stdev=1350.88, samples=3 00:08:59.605 write: IOPS=20.3k, BW=79.4MiB/s (83.2MB/s)(159MiB/2001msec); 0 zone resets 00:08:59.605 slat (nsec): min=3429, max=79731, avg=5516.42, stdev=2753.67 00:08:59.605 clat (usec): min=260, max=10515, avg=3142.23, stdev=1082.75 00:08:59.605 lat (usec): min=266, max=10531, avg=3147.75, stdev=1084.11 00:08:59.605 clat percentiles (usec): 00:08:59.605 | 1.00th=[ 1549], 5.00th=[ 2089], 10.00th=[ 2245], 20.00th=[ 2409], 00:08:59.605 | 30.00th=[ 2507], 40.00th=[ 2638], 50.00th=[ 2769], 60.00th=[ 2933], 00:08:59.605 | 70.00th=[ 3261], 80.00th=[ 3818], 90.00th=[ 4817], 95.00th=[ 5538], 00:08:59.605 | 99.00th=[ 6521], 99.50th=[ 6915], 99.90th=[ 8356], 99.95th=[ 8717], 00:08:59.605 | 99.99th=[10028] 00:08:59.605 bw ( KiB/s): min=77456, max=87104, per=100.00%, avg=83269.33, stdev=5119.31, samples=3 00:08:59.605 iops : min=19364, max=21776, avg=20817.33, stdev=1279.83, samples=3 00:08:59.605 lat (usec) : 
250=0.01%, 500=0.02%, 750=0.01%, 1000=0.06% 00:08:59.606 lat (msec) : 2=3.43%, 4=78.91%, 10=17.56%, 20=0.01% 00:08:59.606 cpu : usr=99.05%, sys=0.05%, ctx=3, majf=0, minf=606 00:08:59.606 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:59.606 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:59.606 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:59.606 issued rwts: total=40769,40661,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:59.606 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:59.606 00:08:59.606 Run status group 0 (all jobs): 00:08:59.606 READ: bw=79.6MiB/s (83.5MB/s), 79.6MiB/s-79.6MiB/s (83.5MB/s-83.5MB/s), io=159MiB (167MB), run=2001-2001msec 00:08:59.606 WRITE: bw=79.4MiB/s (83.2MB/s), 79.4MiB/s-79.4MiB/s (83.2MB/s-83.2MB/s), io=159MiB (167MB), run=2001-2001msec 00:08:59.867 ----------------------------------------------------- 00:08:59.867 Suppressions used: 00:08:59.867 count bytes template 00:08:59.867 1 32 /usr/src/fio/parse.c 00:08:59.867 1 8 libtcmalloc_minimal.so 00:08:59.867 ----------------------------------------------------- 00:08:59.867 00:08:59.867 09:37:27 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:59.867 09:37:27 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:59.867 09:37:27 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:59.867 09:37:27 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:00.128 09:37:27 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:00.128 09:37:27 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:00.397 09:37:27 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:00.397 09:37:27 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:00.397 09:37:27 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:00.398 09:37:27 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:09:00.398 09:37:27 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:00.398 09:37:27 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:09:00.398 09:37:27 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:00.398 09:37:27 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:09:00.398 09:37:27 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:09:00.398 09:37:27 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:09:00.398 09:37:27 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:00.398 09:37:27 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:09:00.398 09:37:27 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:09:00.398 09:37:27 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:00.398 09:37:27 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ 
-n /usr/lib64/libasan.so.8 ]] 00:09:00.398 09:37:27 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:09:00.398 09:37:27 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:00.398 09:37:27 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:00.689 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:00.689 fio-3.35 00:09:00.689 Starting 1 thread 00:09:05.982 00:09:05.982 test: (groupid=0, jobs=1): err= 0: pid=64336: Thu Nov 7 09:37:33 2024 00:09:05.982 read: IOPS=18.1k, BW=70.6MiB/s (74.0MB/s)(141MiB/2001msec) 00:09:05.982 slat (nsec): min=3860, max=73415, avg=5928.12, stdev=2907.65 00:09:05.982 clat (usec): min=326, max=9678, avg=3509.14, stdev=1135.95 00:09:05.982 lat (usec): min=331, max=9716, avg=3515.07, stdev=1137.17 00:09:05.982 clat percentiles (usec): 00:09:05.982 | 1.00th=[ 2114], 5.00th=[ 2442], 10.00th=[ 2540], 20.00th=[ 2704], 00:09:05.982 | 30.00th=[ 2835], 40.00th=[ 2933], 50.00th=[ 3064], 60.00th=[ 3261], 00:09:05.982 | 70.00th=[ 3556], 80.00th=[ 4293], 90.00th=[ 5342], 95.00th=[ 6128], 00:09:05.982 | 99.00th=[ 6849], 99.50th=[ 7242], 99.90th=[ 8029], 99.95th=[ 8160], 00:09:05.982 | 99.99th=[ 9110] 00:09:05.982 bw ( KiB/s): min=72104, max=79096, per=100.00%, avg=74888.00, stdev=3707.13, samples=3 00:09:05.982 iops : min=18026, max=19776, avg=18722.00, stdev=928.30, samples=3 00:09:05.982 write: IOPS=18.1k, BW=70.7MiB/s (74.2MB/s)(142MiB/2001msec); 0 zone resets 00:09:05.982 slat (usec): min=4, max=157, avg= 6.10, stdev= 3.15 00:09:05.982 clat (usec): min=249, max=9140, avg=3539.54, stdev=1137.53 00:09:05.982 lat (usec): min=255, max=9152, avg=3545.64, stdev=1138.75 00:09:05.982 clat percentiles (usec): 00:09:05.982 | 1.00th=[ 2114], 5.00th=[ 2442], 10.00th=[ 2573], 20.00th=[ 2737], 00:09:05.982 | 30.00th=[ 2868], 40.00th=[ 2966], 50.00th=[ 3097], 60.00th=[ 3261], 00:09:05.982 | 70.00th=[ 3621], 80.00th=[ 4359], 90.00th=[ 5407], 95.00th=[ 6128], 00:09:05.982 | 99.00th=[ 6915], 99.50th=[ 7308], 99.90th=[ 7963], 99.95th=[ 8291], 00:09:05.982 | 99.99th=[ 8979] 00:09:05.982 bw ( KiB/s): min=72016, max=79264, per=100.00%, avg=74920.00, stdev=3832.57, samples=3 00:09:05.982 iops : min=18004, max=19816, avg=18730.00, stdev=958.14, samples=3 00:09:05.982 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.02% 00:09:05.982 lat (msec) : 2=0.54%, 4=75.64%, 10=23.78% 00:09:05.982 cpu : usr=98.90%, sys=0.10%, ctx=2, majf=0, minf=606 00:09:05.982 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:05.982 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:05.982 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:05.982 issued rwts: total=36169,36233,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:05.982 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:05.982 00:09:05.982 Run status group 0 (all jobs): 00:09:05.982 READ: bw=70.6MiB/s (74.0MB/s), 70.6MiB/s-70.6MiB/s (74.0MB/s-74.0MB/s), io=141MiB (148MB), run=2001-2001msec 00:09:05.982 WRITE: bw=70.7MiB/s (74.2MB/s), 70.7MiB/s-70.7MiB/s (74.2MB/s-74.2MB/s), io=142MiB (148MB), run=2001-2001msec 00:09:06.242 ----------------------------------------------------- 00:09:06.242 Suppressions used: 00:09:06.242 count bytes template 00:09:06.242 1 32 
/usr/src/fio/parse.c 00:09:06.242 1 8 libtcmalloc_minimal.so 00:09:06.242 ----------------------------------------------------- 00:09:06.242 00:09:06.242 09:37:33 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:06.242 09:37:33 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:06.242 09:37:33 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:06.242 09:37:33 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:06.503 09:37:34 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:06.503 09:37:34 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:06.764 09:37:34 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:06.764 09:37:34 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:06.764 09:37:34 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:06.764 09:37:34 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:09:06.764 09:37:34 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:06.764 09:37:34 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:09:06.764 09:37:34 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:06.764 09:37:34 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:09:06.764 09:37:34 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:09:06.764 09:37:34 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:09:06.764 09:37:34 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:06.764 09:37:34 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:09:06.764 09:37:34 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:09:06.764 09:37:34 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:06.764 09:37:34 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:06.764 09:37:34 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:09:06.764 09:37:34 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:06.764 09:37:34 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:07.024 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:07.024 fio-3.35 00:09:07.024 Starting 1 thread 00:09:12.342 00:09:12.342 test: (groupid=0, jobs=1): err= 0: pid=64393: Thu Nov 7 09:37:39 2024 00:09:12.342 read: IOPS=16.2k, BW=63.5MiB/s (66.5MB/s)(127MiB/2001msec) 00:09:12.342 slat (nsec): min=4838, max=73374, avg=6523.26, stdev=3254.06 00:09:12.342 clat (usec): min=340, max=11411, avg=3899.38, stdev=1236.67 00:09:12.342 lat (usec): min=346, max=11463, avg=3905.91, stdev=1237.98 
00:09:12.342 clat percentiles (usec): 00:09:12.342 | 1.00th=[ 2212], 5.00th=[ 2737], 10.00th=[ 2868], 20.00th=[ 3032], 00:09:12.342 | 30.00th=[ 3163], 40.00th=[ 3261], 50.00th=[ 3392], 60.00th=[ 3589], 00:09:12.342 | 70.00th=[ 4113], 80.00th=[ 4883], 90.00th=[ 5735], 95.00th=[ 6521], 00:09:12.342 | 99.00th=[ 7898], 99.50th=[ 8356], 99.90th=[ 9241], 99.95th=[10028], 00:09:12.342 | 99.99th=[10945] 00:09:12.342 bw ( KiB/s): min=60624, max=68247, per=100.00%, avg=65431.67, stdev=4183.85, samples=3 00:09:12.342 iops : min=15156, max=17061, avg=16357.67, stdev=1045.71, samples=3 00:09:12.342 write: IOPS=16.3k, BW=63.6MiB/s (66.7MB/s)(127MiB/2001msec); 0 zone resets 00:09:12.342 slat (nsec): min=4967, max=97052, avg=6767.97, stdev=3661.10 00:09:12.342 clat (usec): min=283, max=10946, avg=3943.04, stdev=1257.37 00:09:12.342 lat (usec): min=289, max=10958, avg=3949.80, stdev=1258.77 00:09:12.342 clat percentiles (usec): 00:09:12.342 | 1.00th=[ 2180], 5.00th=[ 2769], 10.00th=[ 2900], 20.00th=[ 3064], 00:09:12.342 | 30.00th=[ 3195], 40.00th=[ 3294], 50.00th=[ 3425], 60.00th=[ 3621], 00:09:12.342 | 70.00th=[ 4146], 80.00th=[ 4948], 90.00th=[ 5866], 95.00th=[ 6587], 00:09:12.342 | 99.00th=[ 7963], 99.50th=[ 8455], 99.90th=[ 9503], 99.95th=[10159], 00:09:12.342 | 99.99th=[10814] 00:09:12.342 bw ( KiB/s): min=60968, max=67545, per=100.00%, avg=65256.33, stdev=3716.62, samples=3 00:09:12.342 iops : min=15242, max=16886, avg=16314.00, stdev=929.08, samples=3 00:09:12.342 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:09:12.342 lat (msec) : 2=0.65%, 4=67.73%, 10=31.54%, 20=0.06% 00:09:12.342 cpu : usr=98.65%, sys=0.15%, ctx=3, majf=0, minf=606 00:09:12.342 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:12.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:12.342 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:12.342 issued rwts: total=32504,32574,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:12.342 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:12.342 00:09:12.342 Run status group 0 (all jobs): 00:09:12.342 READ: bw=63.5MiB/s (66.5MB/s), 63.5MiB/s-63.5MiB/s (66.5MB/s-66.5MB/s), io=127MiB (133MB), run=2001-2001msec 00:09:12.342 WRITE: bw=63.6MiB/s (66.7MB/s), 63.6MiB/s-63.6MiB/s (66.7MB/s-66.7MB/s), io=127MiB (133MB), run=2001-2001msec 00:09:12.604 ----------------------------------------------------- 00:09:12.604 Suppressions used: 00:09:12.604 count bytes template 00:09:12.604 1 32 /usr/src/fio/parse.c 00:09:12.604 1 8 libtcmalloc_minimal.so 00:09:12.604 ----------------------------------------------------- 00:09:12.604 00:09:12.604 09:37:40 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:12.604 09:37:40 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:12.604 09:37:40 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:12.604 09:37:40 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:12.866 09:37:40 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:12.866 09:37:40 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:13.132 09:37:40 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:13.132 09:37:40 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' 
--bs=4096 00:09:13.132 09:37:40 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:13.132 09:37:40 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:09:13.132 09:37:40 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:13.132 09:37:40 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:09:13.132 09:37:40 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:13.132 09:37:40 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:09:13.133 09:37:40 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:09:13.133 09:37:40 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:09:13.133 09:37:40 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:13.133 09:37:40 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:09:13.133 09:37:40 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:09:13.133 09:37:40 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:13.133 09:37:40 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:13.133 09:37:40 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:09:13.133 09:37:40 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:13.133 09:37:40 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:13.133 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:13.133 fio-3.35 00:09:13.133 Starting 1 thread 00:09:21.276 00:09:21.276 test: (groupid=0, jobs=1): err= 0: pid=64460: Thu Nov 7 09:37:48 2024 00:09:21.276 read: IOPS=15.3k, BW=59.6MiB/s (62.5MB/s)(119MiB/2001msec) 00:09:21.276 slat (nsec): min=4816, max=80931, avg=7002.78, stdev=4111.63 00:09:21.276 clat (usec): min=400, max=12082, avg=4161.61, stdev=1410.57 00:09:21.276 lat (usec): min=405, max=12087, avg=4168.61, stdev=1412.04 00:09:21.276 clat percentiles (usec): 00:09:21.276 | 1.00th=[ 2008], 5.00th=[ 2671], 10.00th=[ 2835], 20.00th=[ 3032], 00:09:21.276 | 30.00th=[ 3195], 40.00th=[ 3359], 50.00th=[ 3621], 60.00th=[ 4080], 00:09:21.276 | 70.00th=[ 4752], 80.00th=[ 5342], 90.00th=[ 6325], 95.00th=[ 6915], 00:09:21.276 | 99.00th=[ 8225], 99.50th=[ 8717], 99.90th=[ 9896], 99.95th=[10421], 00:09:21.276 | 99.99th=[11994] 00:09:21.276 bw ( KiB/s): min=61096, max=64255, per=100.00%, avg=62999.67, stdev=1676.33, samples=3 00:09:21.276 iops : min=15274, max=16063, avg=15749.67, stdev=418.80, samples=3 00:09:21.276 write: IOPS=15.3k, BW=59.7MiB/s (62.6MB/s)(119MiB/2001msec); 0 zone resets 00:09:21.276 slat (nsec): min=5016, max=74242, avg=7171.77, stdev=4110.28 00:09:21.276 clat (usec): min=391, max=11931, avg=4187.36, stdev=1405.72 00:09:21.276 lat (usec): min=397, max=11936, avg=4194.53, stdev=1407.19 00:09:21.276 clat percentiles (usec): 00:09:21.276 | 1.00th=[ 2073], 5.00th=[ 2704], 10.00th=[ 2868], 20.00th=[ 3064], 00:09:21.276 | 30.00th=[ 
3228], 40.00th=[ 3392], 50.00th=[ 3654], 60.00th=[ 4113], 00:09:21.276 | 70.00th=[ 4752], 80.00th=[ 5342], 90.00th=[ 6325], 95.00th=[ 6915], 00:09:21.276 | 99.00th=[ 8291], 99.50th=[ 8848], 99.90th=[ 9896], 99.95th=[10421], 00:09:21.276 | 99.99th=[11338] 00:09:21.276 bw ( KiB/s): min=60416, max=63920, per=100.00%, avg=62656.00, stdev=1945.23, samples=3 00:09:21.276 iops : min=15104, max=15980, avg=15664.00, stdev=486.31, samples=3 00:09:21.276 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.02% 00:09:21.276 lat (msec) : 2=0.86%, 4=57.36%, 10=41.64%, 20=0.09% 00:09:21.276 cpu : usr=98.65%, sys=0.00%, ctx=3, majf=0, minf=604 00:09:21.276 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:21.276 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.276 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:21.276 issued rwts: total=30534,30580,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:21.276 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:21.276 00:09:21.276 Run status group 0 (all jobs): 00:09:21.276 READ: bw=59.6MiB/s (62.5MB/s), 59.6MiB/s-59.6MiB/s (62.5MB/s-62.5MB/s), io=119MiB (125MB), run=2001-2001msec 00:09:21.276 WRITE: bw=59.7MiB/s (62.6MB/s), 59.7MiB/s-59.7MiB/s (62.6MB/s-62.6MB/s), io=119MiB (125MB), run=2001-2001msec 00:09:21.276 ----------------------------------------------------- 00:09:21.276 Suppressions used: 00:09:21.276 count bytes template 00:09:21.276 1 32 /usr/src/fio/parse.c 00:09:21.276 1 8 libtcmalloc_minimal.so 00:09:21.276 ----------------------------------------------------- 00:09:21.276 00:09:21.276 09:37:48 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:21.276 09:37:48 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:21.276 00:09:21.276 real 0m27.339s 00:09:21.276 user 0m16.339s 00:09:21.276 sys 0m20.051s 00:09:21.276 09:37:48 nvme.nvme_fio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:21.276 09:37:48 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:21.276 ************************************ 00:09:21.276 END TEST nvme_fio 00:09:21.276 ************************************ 00:09:21.276 00:09:21.276 real 1m36.640s 00:09:21.276 user 3m36.522s 00:09:21.276 sys 0m30.825s 00:09:21.276 09:37:48 nvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:21.276 ************************************ 00:09:21.276 END TEST nvme 00:09:21.276 ************************************ 00:09:21.276 09:37:48 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:21.276 09:37:48 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:21.276 09:37:48 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:21.276 09:37:48 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:21.276 09:37:48 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:21.276 09:37:48 -- common/autotest_common.sh@10 -- # set +x 00:09:21.276 ************************************ 00:09:21.276 START TEST nvme_scc 00:09:21.276 ************************************ 00:09:21.276 09:37:48 nvme_scc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:21.276 * Looking for test storage... 
00:09:21.276 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:21.276 09:37:48 nvme_scc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:21.276 09:37:48 nvme_scc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:21.276 09:37:48 nvme_scc -- common/autotest_common.sh@1691 -- # lcov --version 00:09:21.276 09:37:48 nvme_scc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:21.276 09:37:48 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:21.276 09:37:48 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:21.276 09:37:48 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:21.276 09:37:48 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:21.276 09:37:48 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:21.276 09:37:48 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:21.276 09:37:48 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:21.276 09:37:48 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:21.276 09:37:48 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:21.276 09:37:48 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:21.276 09:37:48 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:21.276 09:37:48 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:21.276 09:37:48 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:21.276 09:37:48 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:21.276 09:37:48 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:21.276 09:37:48 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:21.276 09:37:48 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:21.276 09:37:48 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:21.276 09:37:48 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:21.276 09:37:48 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:21.276 09:37:48 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:21.276 09:37:48 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:21.276 09:37:48 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:21.276 09:37:48 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:21.276 09:37:48 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:21.276 09:37:48 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:21.276 09:37:48 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:21.276 09:37:48 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:21.276 09:37:48 nvme_scc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:21.276 09:37:48 nvme_scc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:21.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.276 --rc genhtml_branch_coverage=1 00:09:21.276 --rc genhtml_function_coverage=1 00:09:21.276 --rc genhtml_legend=1 00:09:21.276 --rc geninfo_all_blocks=1 00:09:21.276 --rc geninfo_unexecuted_blocks=1 00:09:21.276 00:09:21.276 ' 00:09:21.276 09:37:48 nvme_scc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:21.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.276 --rc genhtml_branch_coverage=1 00:09:21.276 --rc genhtml_function_coverage=1 00:09:21.276 --rc genhtml_legend=1 00:09:21.276 --rc geninfo_all_blocks=1 00:09:21.276 --rc geninfo_unexecuted_blocks=1 00:09:21.276 00:09:21.276 ' 00:09:21.276 09:37:48 nvme_scc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:09:21.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.276 --rc genhtml_branch_coverage=1 00:09:21.276 --rc genhtml_function_coverage=1 00:09:21.276 --rc genhtml_legend=1 00:09:21.276 --rc geninfo_all_blocks=1 00:09:21.276 --rc geninfo_unexecuted_blocks=1 00:09:21.276 00:09:21.276 ' 00:09:21.277 09:37:48 nvme_scc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:21.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.277 --rc genhtml_branch_coverage=1 00:09:21.277 --rc genhtml_function_coverage=1 00:09:21.277 --rc genhtml_legend=1 00:09:21.277 --rc geninfo_all_blocks=1 00:09:21.277 --rc geninfo_unexecuted_blocks=1 00:09:21.277 00:09:21.277 ' 00:09:21.277 09:37:48 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:21.277 09:37:48 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:21.277 09:37:48 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:21.277 09:37:48 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:21.277 09:37:48 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:21.277 09:37:48 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:21.277 09:37:48 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:21.277 09:37:48 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:21.277 09:37:48 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:21.277 09:37:48 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.277 09:37:48 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.277 09:37:48 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.277 09:37:48 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:21.277 09:37:48 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:09:21.277 09:37:48 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:21.277 09:37:48 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:21.277 09:37:48 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:21.277 09:37:48 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:21.277 09:37:48 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:21.277 09:37:48 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:21.277 09:37:48 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:21.277 09:37:48 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:21.277 09:37:48 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:21.277 09:37:48 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:21.277 09:37:48 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:21.277 09:37:48 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:21.277 09:37:48 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:21.277 09:37:48 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:21.537 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:21.537 Waiting for block devices as requested 00:09:21.537 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:21.798 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:21.798 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:21.798 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:27.095 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:27.095 09:37:54 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:27.095 09:37:54 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:27.095 09:37:54 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:27.095 09:37:54 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:27.095 09:37:54 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:27.095 09:37:54 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:27.095 09:37:54 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:27.095 09:37:54 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:27.095 09:37:54 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:27.095 09:37:54 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:27.095 09:37:54 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:27.095 09:37:54 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:27.095 09:37:54 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:27.095 09:37:54 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:27.095 09:37:54 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:27.095 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.095 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.095 09:37:54 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:27.095 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:27.095 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.095 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.095 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:27.095 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:27.095 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
00:09:27.095 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.095 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.095 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:27.095 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:27.095 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:27.095 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.095 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.095 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:27.095 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:27.095 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:27.095 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.095 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.095 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:27.095 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:27.095 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:27.095 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.095 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.095 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:27.095 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:27.095 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:27.095 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.095 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.095 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:27.095 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:27.095 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:27.095 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.095 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.095 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:27.095 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:27.095 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:27.095 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.095 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.095 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.095 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:27.095 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:27.095 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.095 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.095 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:27.095 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:27.095 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:27.095 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.096 09:37:54 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
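The wall of assignments above is functions.sh's nvme_get helper at work: it runs /usr/local/src/nvme-cli/nvme id-ctrl against the device, splits every output line on ':' via `IFS=: read -r reg val`, and evals each pair into a global associative array (nvme0 here) so later checks can look controller fields up by name; the quoting in the traced evals is what preserves padded values such as sn="12341 " and mn="QEMU NVMe Ctrl ". A minimal sketch of that pattern, assuming nvme-cli's one-field-per-line "reg : val" output as seen in the trace — the array name is fixed here, whereas the real helper needs eval because the target name (nvme0, nvme0n1, ...) is chosen at run time:

    # Illustrative only -- mirrors the IFS=: / read / assign sequence traced above.
    nvme_get_sketch() {
        local dev=$1 reg val
        declare -gA ctrl=()
        while IFS=: read -r reg val; do
            [[ -n $reg && -n $val ]] || continue   # skip blank/unsplittable lines
            reg=${reg//[[:space:]]/}               # "sn      " -> "sn"
            ctrl[$reg]=${val# }                    # keep trailing padding, e.g. "12341 "
        done < <(/usr/local/src/nvme-cli/nvme id-ctrl "$dev")
    }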
00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.096 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:27.097 09:37:54 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.097 09:37:54 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.097 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.098 09:37:54 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:27.098 09:37:54 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.098 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[dlfeat]="1"' 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
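By this point the walk has descended from the controller into its first namespace: nvme_get was re-entered as `nvme_get nvme0n1 id-ns /dev/nvme0n1`, so the keys now being filled (nsze, ncap, nuse, flbas, nlbaf, and the lbafN descriptors a few records further down) are Identify Namespace fields rather than controller ones. The traced values pin down the namespace geometry; as a worked example, flbas=0x4 selects LBA format 4, which the upcoming lbaf4 entry reports as "ms:0 lbads:12 rp:0 (in use)", i.e. 2^12 = 4096-byte blocks with no metadata:

    nsze=$((0x140000))                 # namespace size in logical blocks, from the trace
    lbads=12                           # log2(block size) of the in-use LBA format
    echo $(( nsze * (1 << lbads) ))    # 5368709120 bytes = 5 GiB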
00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:27.099 09:37:54 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:27.099 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:27.100 09:37:54 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:27.100 09:37:54 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:27.100 09:37:54 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:27.100 09:37:54 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.100 09:37:54 nvme_scc -- 
nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:27.100 09:37:54 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:27.100 
09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.100 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 
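Meanwhile the enumeration has moved on to the second controller: a few records back (functions.sh @47-@52) the outer loop found /sys/class/nvme/nvme1, its PCI address 0000:00:10.0 passed pci_can_use — the allow-list variable expanded empty in the `[[ =~ ]]` test, so the `[[ -z '' ]]` branch returned 0 — and the whole id-ctrl walk is repeating for the second QEMU controller (serial 12340). A reconstructed shape of that loop, including the bookkeeping lines @58-@63 that closed the nvme0 iteration; the sysfs-to-BDF step is an assumption and the ctrls/nvmes/bdfs/ordered_ctrls arrays are declared by the caller, but the rest follows the traced line numbers:

    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(basename "$(readlink -f "$ctrl/device")")   # assumed: BDF via sysfs, e.g. 0000:00:10.0
        pci_can_use "$pci" || continue                    # skip controllers filtered by allow/block lists
        ctrl_dev=${ctrl##*/}                              # e.g. nvme1
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"     # fills the nvme1 array as traced above
        # ... per-namespace id-ns walk (@54-@58) omitted ...
        ctrls["$ctrl_dev"]=$ctrl_dev
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns                 # name of this controller's namespace map
        bdfs["$ctrl_dev"]=$pci
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
    done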
00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:27.101 09:37:54 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.101 09:37:54 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.101 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:27.102 09:37:54 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:27.102 09:37:54 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:27.102 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
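The trace above is the bash xtrace of SPDK's nvme_get helper (nvme/functions.sh lines 16-23 in the frames shown): it pipes `nvme id-ctrl` text output through `IFS=: read -r reg val` and evals each pair into a controller-scoped associative array such as nvme1. A minimal sketch of the same parsing pattern, assuming nvme-cli's plain "field : value" output; the array name ctrl_info and the direct assignment (instead of the traced eval / `local -gA` indirection) are simplifications for illustration, not SPDK's actual interface:

  declare -A ctrl_info
  while IFS=: read -r reg val; do
      [[ -n $val ]] || continue                 # keep only "field : value" lines
      reg=${reg//[[:space:]]/}                  # "ps    0" -> "ps0", as in the trace
      val=${val#"${val%%[![:space:]]*}"}        # left-trim the value
      ctrl_info[$reg]=$val                      # traced script: eval 'nvme1[reg]="val"'
  done < <(nvme id-ctrl /dev/nvme1)
  echo "mn=${ctrl_info[mn]} frmw=${ctrl_info[frmw]}"

Note that the last read field absorbs the remainder of each line unsplit, which is why colon-bearing values such as the ps0 power-state string ("mp:25.00W operational enlat:16 ...") survive intact in the trace.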
00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1n1[ncap]=0x17a17a 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.103 09:37:54 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme1n1[nvmcap]="0"' 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.103 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:27.104 
09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:27.104 09:37:54 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:27.104 09:37:54 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:27.104 09:37:54 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:27.104 09:37:54 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:27.104 09:37:54 nvme_scc -- 
nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.104 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.105 09:37:54 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:27.105 09:37:54 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:27.105 09:37:54 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.105 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
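The wctemp/cctemp values recorded just above (343 and 373, identical for both QEMU controllers in this run) are kelvins per the NVMe Identify Controller definition, so the emulated device advertises roughly a 70 °C warning and 100 °C critical composite-temperature threshold. A one-liner with the values copied from this trace (the spec reports integer kelvins, so subtracting 273 is accurate to within a degree):

  wctemp=343 cctemp=373
  echo "warning: $((wctemp - 273)) C, critical: $((cctemp - 273)) C"   # 70 C, 100 C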
00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:27.106 09:37:54 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
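Between the two identify dumps above, the trace also shows the enumeration bookkeeping (functions.sh lines 47-63): each /sys/class/nvme/nvme* entry is vetted with pci_can_use, identified with nvme_get, and recorded in the ctrls, nvmes, bdfs, and ordered_ctrls arrays (e.g. nvme1 at 0000:00:10.0, nvme2 at 0000:00:12.0). A rough sketch of that loop's shape, with the blocklist check and the per-controller/per-namespace identify calls elided; the sysfs readlink is an assumption about how the PCI address is obtained, not a quote of the script:

  declare -A ctrls nvmes bdfs
  declare -a ordered_ctrls
  for ctrl in /sys/class/nvme/nvme*; do
      [[ -e $ctrl ]] || continue
      ctrl_dev=${ctrl##*/}                             # e.g. nvme2
      bdf=$(basename "$(readlink -f "$ctrl/device")")  # e.g. 0000:00:12.0
      ctrls[$ctrl_dev]=$ctrl_dev
      nvmes[$ctrl_dev]=${ctrl_dev}_ns                  # name of the per-controller ns array
      bdfs[$ctrl_dev]=$bdf
      ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev       # index by controller number
  done
  for c in "${ordered_ctrls[@]}"; do echo "$c @ ${bdfs[$c]}"; done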
00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.106 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 
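[annotation] The sqes=0x66 and cqes=0x44 values just captured are packed power-of-two sizes from the NVMe Identify Controller data: the low nibble is the required queue-entry size and the high nibble the maximum, each as log2(bytes). A quick shell check of that decoding (decode_qes is an illustrative helper name):

    # Decode an NVMe SQES/CQES byte: low nibble = required size,
    # high nibble = maximum size, both stored as log2(bytes).
    decode_qes() {
        local v=$(( $1 )) min max
        min=$(( 1 << (v & 0xf) ))
        max=$(( 1 << ((v >> 4) & 0xf) ))
        printf 'required=%dB maximum=%dB\n' "$min" "$max"
    }

    decode_qes 0x66   # -> required=64B maximum=64B (submission queue entries)
    decode_qes 0x44   # -> required=16B maximum=16B (completion queue entries)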
00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0x3 ]] 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:27.107 
09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
0x100000 ]] 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.107 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 
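[annotation] In the id-ns dump for nvme2n1 above, nsze/ncap/nuse are all 0x100000 logical blocks and flbas=0x4 selects LBA format 4, the entry recorded further down as "(in use)" with lbads:12. Since a block holds 2^lbads bytes, the namespace works out to 0x100000 * 4096 bytes = 4 GiB; a hedged snippet reproducing the arithmetic from the traced values:

    # Capacity implied by the traced values: nsze LBAs x 2^lbads bytes.
    nsze=0x100000   # namespace size in logical blocks
    lbads=12        # from the "(in use)" format lbaf4: 2^12 = 4096-byte blocks
    echo $(( nsze * (1 << lbads) ))                 # 4294967296
    echo "$(( (nsze * (1 << lbads)) >> 30 )) GiB"   # 4 GiB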
00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.108 09:37:54 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:27.108 09:37:54 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:27.108 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:27.109 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:27.109 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.109 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.109 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:27.109 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:27.109 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:27.109 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.109 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.109 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:27.109 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:27.109 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:27.109 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.109 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.109 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
ms:8 lbads:9 rp:0 ]] 00:09:27.109 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:27.109 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:27.109 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.109 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.109 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:27.109 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:27.109 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:27.109 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.109 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.109 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:27.373 09:37:54 nvme_scc -- 
nvme/functions.sh@18 -- # shift 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:27.373 09:37:54 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:27.373 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 
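[annotation] This second identify pass over nvme2n2 is driven by the namespace loop visible at functions.sh@54-58: each /sys/class/nvme/nvme2/nvme2nN child gets its own nvme_get run, and the numeric suffix keys the device into the controller's namespace map (in the real script _ctrl_ns is a nameref to nvme2_ns, per functions.sh@53). A minimal standalone sketch of that enumeration, using a plain local array for self-containment; enumerate_ns is an illustrative name:

    # Enumerate a controller's namespaces the way functions.sh@54-58 does:
    # glob the sysfs children, strip everything up to the trailing 'n' to
    # get the namespace index, and record the device name under that index.
    enumerate_ns() {
        local ctrl=$1 ns
        local -A _ctrl_ns=()
        for ns in "$ctrl/${ctrl##*/}n"*; do
            [[ -e $ns ]] || continue
            _ctrl_ns[${ns##*n}]=${ns##*/}
        done
        declare -p _ctrl_ns
    }

    enumerate_ns /sys/class/nvme/nvme2
    # -> declare -A _ctrl_ns=([1]="nvme2n1" [2]="nvme2n2" [3]="nvme2n3")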
00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 
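[annotation] The lbaf0..lbaf7 entries recorded next describe each supported LBA format as "ms:<metadata bytes> lbads:<log2 data bytes> rp:<relative performance>"; nlbaf=7 is zero-based, so eight formats follow, and flbas marks which one is "(in use)". A hedged helper that pulls the data-block size back out of one of those stored strings (lbaf_block_size is an illustrative name):

    # Extract lbads from an lbafN string as stored above and return the
    # LBA data size in bytes.
    lbaf_block_size() {
        local lbads=${1#*lbads:}   # drop everything up to "lbads:"
        lbads=${lbads%% *}         # keep the first field that follows
        echo $(( 1 << lbads ))
    }

    lbaf_block_size 'ms:0 lbads:12 rp:0 (in use)'   # -> 4096
    lbaf_block_size 'ms:8 lbads:9 rp:0 '            # -> 512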
00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.374 
09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.374 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.375 
09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:27.375 09:37:54 
nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.375 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:27.376 
09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:27.376 09:37:54 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:27.376 09:37:54 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:27.376 09:37:54 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:27.376 09:37:54 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:27.376 09:37:54 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.376 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
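[editor's note] The block of trace above repeats one idiom per id-ctrl field: split each "name : value" line on ':' via IFS, guard on a non-empty value, and eval the pair into a global associative array (the 'local -gA' at functions.sh@20). A minimal standalone sketch of that loop, assuming nvme-cli's plain-text output format; the helper name parse_id_ctrl and the nameref (used here instead of the script's eval) are illustrative only:

    parse_id_ctrl() {                      # hypothetical helper, not from functions.sh
        local dev=$1 ref=$2 reg val
        declare -gA "$ref=()"              # same effect as the 'local -gA' in the trace
        local -n arr=$ref                  # bash nameref in place of the script's eval
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}       # key is the register name, e.g. sn, oncs
            [[ -n $val ]] || continue      # the '[[ -n ... ]]' guard seen on every field
            arr[$reg]=${val# }             # keep the raw value, dropping the pad after ':'
        done < <(nvme id-ctrl "$dev")
    }
    # parse_id_ctrl /dev/nvme3 nvme3 && echo "${nvme3[sn]}"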
00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
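[editor's note] Once a controller's array is filled, functions.sh@58-63 (traced above for nvme2 and again below for nvme3) records it in the global maps the library declares up front. Condensed from those trace lines; the right-hand sides are the unexpanded forms inferred from the expanded values shown in the trace:

    _ctrl_ns[${ns##*n}]=$ns                      # e.g. nvme2_ns[3]=nvme2n3
    ctrls[$ctrl_dev]=$ctrl_dev                   # ctrls[nvme2]=nvme2
    nvmes[$ctrl_dev]=${ctrl_dev}_ns              # name of the per-controller namespace map
    bdfs[$ctrl_dev]=$pci                         # PCI address, e.g. 0000:00:12.0
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev   # indexed slot = controller number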
00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme3[npss]="0"' 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.377 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.378 
09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme3[hmminds]="0"' 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:27.378 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.379 09:37:54 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
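[editor's note] After the scan, the selection pass below (functions.sh@184-199) asks each controller whether it supports the NVMe Copy command: it dereferences the controller's array through a bash nameref, pulls the oncs word (0x15d on every controller in this run), and tests bit 8, which the NVMe spec assigns to Copy — the command nvme_scc exercises. A compact sketch of that predicate, paraphrased from the trace:

    ctrl_has_scc() {
        local -n _ctrl=$1            # nameref into an array such as nvme1, as built above
        local oncs=${_ctrl[oncs]:-0} # 0x15d here
        (( oncs & 1 << 8 ))          # ONCS bit 8 = Copy (simple copy) support
    }
    # ctrl_has_scc nvme1 && echo "nvme1 supports SCC"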
00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:27.379 09:37:54 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:27.379 09:37:54 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:09:27.379 
09:37:54 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:09:27.379 09:37:54 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:09:27.380 09:37:54 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:09:27.380 09:37:54 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:09:27.380 09:37:54 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:09:27.380 09:37:54 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:27.380 09:37:54 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:27.380 09:37:54 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:27.380 09:37:54 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:27.380 09:37:54 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:27.380 09:37:54 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:27.380 09:37:54 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:09:27.380 09:37:54 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:27.380 09:37:54 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:09:27.380 09:37:54 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:09:27.380 09:37:54 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:09:27.380 09:37:54 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:09:27.380 09:37:54 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:09:27.380 09:37:54 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:09:27.380 09:37:54 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:27.380 09:37:54 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:27.380 09:37:54 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:27.380 09:37:54 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:27.380 09:37:54 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:27.380 09:37:54 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:27.380 09:37:54 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:09:27.380 09:37:54 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:09:27.380 09:37:54 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:09:27.380 09:37:54 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:09:27.380 09:37:54 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:09:27.380 09:37:54 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:09:27.380 09:37:54 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:27.951 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:28.211 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:28.472 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:28.472 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:28.472 0000:00:12.0 (1b36 
0010): nvme -> uio_pci_generic 00:09:28.472 09:37:56 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:28.472 09:37:56 nvme_scc -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:09:28.472 09:37:56 nvme_scc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:28.472 09:37:56 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:28.472 ************************************ 00:09:28.472 START TEST nvme_simple_copy 00:09:28.472 ************************************ 00:09:28.472 09:37:56 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:28.734 Initializing NVMe Controllers 00:09:28.734 Attaching to 0000:00:10.0 00:09:28.734 Controller supports SCC. Attached to 0000:00:10.0 00:09:28.734 Namespace ID: 1 size: 6GB 00:09:28.734 Initialization complete. 00:09:28.734 00:09:28.734 Controller QEMU NVMe Ctrl (12340 ) 00:09:28.734 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:09:28.734 Namespace Block Size:4096 00:09:28.734 Writing LBAs 0 to 63 with Random Data 00:09:28.734 Copied LBAs from 0 - 63 to the Destination LBA 256 00:09:28.734 LBAs matching Written Data: 64 00:09:28.734 00:09:28.734 real 0m0.282s 00:09:28.734 user 0m0.108s 00:09:28.734 sys 0m0.071s 00:09:28.734 09:37:56 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:28.734 ************************************ 00:09:28.734 END TEST nvme_simple_copy 00:09:28.734 ************************************ 00:09:28.734 09:37:56 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:09:28.734 ************************************ 00:09:28.734 END TEST nvme_scc 00:09:28.734 ************************************ 00:09:28.734 00:09:28.734 real 0m7.883s 00:09:28.734 user 0m1.072s 00:09:28.734 sys 0m1.519s 00:09:28.734 09:37:56 nvme_scc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:28.734 09:37:56 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:28.996 09:37:56 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:09:28.996 09:37:56 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:09:28.996 09:37:56 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:09:28.996 09:37:56 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:09:28.996 09:37:56 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:09:28.996 09:37:56 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:28.997 09:37:56 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:28.997 09:37:56 -- common/autotest_common.sh@10 -- # set +x 00:09:28.997 ************************************ 00:09:28.997 START TEST nvme_fdp 00:09:28.997 ************************************ 00:09:28.997 09:37:56 nvme_fdp -- common/autotest_common.sh@1127 -- # test/nvme/nvme_fdp.sh 00:09:28.997 * Looking for test storage... 
00:09:28.997 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:28.997 09:37:56 nvme_fdp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:28.997 09:37:56 nvme_fdp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:28.997 09:37:56 nvme_fdp -- common/autotest_common.sh@1691 -- # lcov --version 00:09:28.997 09:37:56 nvme_fdp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:28.997 09:37:56 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:28.997 09:37:56 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:28.997 09:37:56 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:28.997 09:37:56 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:09:28.997 09:37:56 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:09:28.997 09:37:56 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:09:28.997 09:37:56 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:09:28.997 09:37:56 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:09:28.997 09:37:56 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:09:28.997 09:37:56 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:09:28.997 09:37:56 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:28.997 09:37:56 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:09:28.997 09:37:56 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:09:28.997 09:37:56 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:28.997 09:37:56 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:28.997 09:37:56 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:09:28.997 09:37:56 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:09:28.997 09:37:56 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:28.997 09:37:56 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:09:28.997 09:37:56 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:28.997 09:37:56 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:09:28.997 09:37:56 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:09:28.997 09:37:56 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:28.997 09:37:56 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:09:28.997 09:37:56 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:28.997 09:37:56 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:28.997 09:37:56 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:28.997 09:37:56 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:09:28.997 09:37:56 nvme_fdp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:28.997 09:37:56 nvme_fdp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:28.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.997 --rc genhtml_branch_coverage=1 00:09:28.997 --rc genhtml_function_coverage=1 00:09:28.997 --rc genhtml_legend=1 00:09:28.997 --rc geninfo_all_blocks=1 00:09:28.997 --rc geninfo_unexecuted_blocks=1 00:09:28.997 00:09:28.997 ' 00:09:28.997 09:37:56 nvme_fdp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:28.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.997 --rc genhtml_branch_coverage=1 00:09:28.997 --rc genhtml_function_coverage=1 00:09:28.997 --rc genhtml_legend=1 00:09:28.997 --rc geninfo_all_blocks=1 00:09:28.997 --rc geninfo_unexecuted_blocks=1 00:09:28.997 00:09:28.997 ' 00:09:28.997 09:37:56 nvme_fdp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:09:28.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.997 --rc genhtml_branch_coverage=1 00:09:28.997 --rc genhtml_function_coverage=1 00:09:28.997 --rc genhtml_legend=1 00:09:28.997 --rc geninfo_all_blocks=1 00:09:28.997 --rc geninfo_unexecuted_blocks=1 00:09:28.997 00:09:28.997 ' 00:09:28.997 09:37:56 nvme_fdp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:28.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.997 --rc genhtml_branch_coverage=1 00:09:28.997 --rc genhtml_function_coverage=1 00:09:28.997 --rc genhtml_legend=1 00:09:28.997 --rc geninfo_all_blocks=1 00:09:28.997 --rc geninfo_unexecuted_blocks=1 00:09:28.997 00:09:28.997 ' 00:09:28.997 09:37:56 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:28.997 09:37:56 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:28.997 09:37:56 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:28.997 09:37:56 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:28.997 09:37:56 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:28.997 09:37:56 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:09:28.997 09:37:56 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:28.997 09:37:56 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:28.997 09:37:56 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:28.997 09:37:56 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.997 09:37:56 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.997 09:37:56 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.997 09:37:56 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:09:28.997 09:37:56 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
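[editor's note] The lcov detection above walks scripts/common.sh's cmp_versions: both version strings are split on '.', '-' and ':' into arrays, then compared component-wise as integers ('lt 1.15 2' takes the ver1[v] < ver2[v] branch at v=0). A hedged reconstruction of that comparison, assuming purely numeric components; the real common.sh additionally routes each component through its decimal helper:

    lt() {                                   # "is $1 older than $2", as in 'lt 1.15 2'
        local -a ver1 ver2
        local v len
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1                             # equal versions are not "less than"
    }
    # lt 1.15 2 && echo "lcov predates 2.x"  # the branch this build takes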
00:09:28.997 09:37:56 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:09:28.997 09:37:56 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:28.997 09:37:56 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:09:28.997 09:37:56 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:28.997 09:37:56 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:09:28.997 09:37:56 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:28.997 09:37:56 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:28.997 09:37:56 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:28.997 09:37:56 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:09:28.997 09:37:56 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:28.997 09:37:56 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:29.258 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:29.520 Waiting for block devices as requested 00:09:29.520 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:29.780 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:29.780 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:29.780 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:35.076 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:35.076 09:38:02 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:35.076 09:38:02 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:35.076 09:38:02 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:35.076 09:38:02 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:35.076 09:38:02 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 
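Note: the wall of IFS=: / read / eval steps that fills the log from here on is one small loop — nvme_get pipes nvme-cli's "reg : val" output through a while-read and stores each pair in a dynamically named global associative array (nvme0, nvme0n1, ...), which is what the ctrls/nvmes/bdfs declarations above are later keyed by. A condensed sketch of the mechanism as the xtrace shows it, with the plumbing simplified (the real functions.sh hardcodes its own nvme-cli binary and quotes values differently):

nvme_get() {
    local ref=$1 reg val
    shift
    local -gA "$ref=()"                  # e.g. declare a global nvme0=()
    while IFS=: read -r reg val; do
        [[ -n $reg && -n $val ]] || continue
        # eval is what allows assigning into the *named* array;
        # this expands to, e.g.:  nvme0[vid]="0x1b36"
        eval "${ref}[${reg// /}]=\"${val# }\""
    done < <("$@")
}

nvme_get nvme0 nvme id-ctrl /dev/nvme0
echo "${nvme0[sn]}"                      # -> "12341 " for the first controller here

Every eval line in the trace below is one iteration of that loop, so the hundreds of repeated IFS=:/read/eval records are the same three statements executed once per id-ctrl (and later id-ns) field.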
00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:35.076 09:38:02 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.076 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:35.077 09:38:02 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
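Note: every register lands in the array as a plain hex string, so downstream feature checks reduce to one arithmetic test against a spec-defined bit. For example, the oacs=0x12a captured above has bit 5 set, which the NVMe base spec assigns to Directives support — the mechanism FDP placement handles ride on. A hypothetical helper in the same style (the function name and bit constant are illustrative, not from functions.sh; the bit numbering comes from the spec, not from anything printed in this log):

# Hypothetical: test a capability bit in an Identify Controller field that
# nvme_get captured as a hex string. OACS bit 5 = Directives support per the
# NVMe base spec; the oacs value below is the one read from nvme0 above.
has_oacs_bit() {
    local -n _ctrl=$1                    # nameref into nvme0, nvme1, ...
    (( ${_ctrl[oacs]:-0} & (1 << $2) ))  # 0x... strings parse as hex here
}

declare -A nvme0=([oacs]=0x12a)
has_oacs_bit nvme0 5 && echo "nvme0 advertises Directives"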
00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:35.077 09:38:02 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:35.077 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.078 09:38:02 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:35.078 09:38:02 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x7 ]] 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:35.078 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:35.079 09:38:02 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:35.079 
09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:35.079 09:38:02 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.079 09:38:02 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:35.079 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme0n1[npwa]="0"' 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:35.080 09:38:02 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:12 rp:0 (in use) ]] 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:35.080 09:38:02 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:35.080 09:38:02 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:35.080 09:38:02 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:35.081 09:38:02 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:35.081 09:38:02 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.081 09:38:02 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.081 
09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:35.081 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.082 09:38:02 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.082 
09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.082 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.083 09:38:02 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:35.083 09:38:02 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.083 09:38:02 
nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme1[ofcs]=0 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:35.083 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x17a17a ]] 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:35.084 09:38:02 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.084 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[anagrpid]="0"' 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:35.085 09:38:02 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:35.085 09:38:02 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:35.085 09:38:02 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:35.085 09:38:02 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:35.085 
09:38:02 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:35.085 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:35.086 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:35.086 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.086 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.086 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:35.086 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:35.086 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:35.086 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.086 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.086 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.086 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:35.086 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:35.086 09:38:02 nvme_fdp 
-- nvme/functions.sh@21 -- # IFS=: 00:09:35.086 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.086 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:35.086 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:35.086 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:35.086 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.086 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.086 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.086 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:35.086 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:35.086 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.086 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.086 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:35.086 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:35.086 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:35.086 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.086 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.086 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.086 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:35.086 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:35.086 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.086 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.086 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.086 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:35.086 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:35.086 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.086 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.086 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:35.086 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:35.086 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:35.086 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.086 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.086 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:35.086 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:35.086 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:35.086 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.086 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.086 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.086 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:35.086 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:35.086 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.086 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.086 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:35.086 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:35.086 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:35.086 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.086 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.086 09:38:02 nvme_fdp -- 
00:09:35.086 09:38:02 nvme_fdp -- nvme/functions.sh@21-23 -- # id-ctrl fields parsed into nvme2[] (repetitive IFS=:/read/eval xtrace condensed below, one cycle per field in the raw log):
00:09:35.086 nvme2: fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3
00:09:35.086 nvme2: frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0
00:09:35.087 nvme2: rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0
00:09:35.087 nvme2: anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d
00:09:35.087 nvme2: fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0
00:09:35.088 nvme2: subnqn=nqn.2019-08.org.qemu:12342 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
00:09:35.088 nvme2: ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:09:35.088 09:38:02 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:09:35.088 09:38:02 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:09:35.088 09:38:02 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:09:35.088 09:38:02 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:09:35.088 09:38:02 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:09:35.088 09:38:02 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
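The trace above is the nvme_get helper at work: each "reg : val" line printed by nvme-cli is split at the first colon and stored in a named associative array. A minimal sketch of that pattern, not the harness's exact code; nvme_get_sketch is a hypothetical stand-in and assumes nvme-cli is on PATH:

    nvme_get_sketch() {
        # ref: array name (e.g. nvme2n1), subcmd: id-ctrl|id-ns, dev: /dev/...
        local ref=$1 subcmd=$2 dev=$3 reg val
        declare -gA "$ref=()"                    # fresh global assoc array
        while IFS=: read -r reg val; do          # val keeps any later ':'s
            reg=${reg//[[:space:]]/}             # "ps 0   " -> "ps0"
            val=${val#"${val%%[![:space:]]*}"}   # trim leading blanks
            [[ -n $reg && -n $val ]] && eval "${ref}[\$reg]=\$val"
        done < <(nvme "$subcmd" "$dev")
    }

    nvme_get_sketch nvme2n1 id-ns /dev/nvme2n1
    echo "${nvme2n1[nsze]}"                      # -> 0x100000 in this run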
00:09:35.088 nvme2n1: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0
00:09:35.089 nvme2n1: rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
00:09:35.089 nvme2n1: npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:09:35.089 nvme2n1: nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:35.089 nvme2n1: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:09:35.090 nvme2n1: lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:09:35.090 09:38:02 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[1]=nvme2n1
00:09:35.090 09:38:02 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:09:35.090 09:38:02 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:09:35.090 09:38:02 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:09:35.090 09:38:02 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
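Namespace discovery, visible at functions.sh@54-58 above, is a plain sysfs glob: every nvme2n* child of the controller directory is identified and indexed by its namespace number. Sketched under the same assumptions, with the nameref mirroring the trace's local -n _ctrl_ns and nvme_get_sketch being the hypothetical helper above:

    declare -A nvme2_ns=()
    ctrl=/sys/class/nvme/nvme2
    declare -n _ctrl_ns=nvme2_ns                  # nameref, as in the trace
    for ns in "$ctrl/${ctrl##*/}n"*; do           # nvme2n1, nvme2n2, ...
        [[ -e $ns ]] || continue                  # guard an unmatched glob
        ns_dev=${ns##*/}                          # e.g. nvme2n2
        nvme_get_sketch "$ns_dev" id-ns "/dev/$ns_dev"
        _ctrl_ns[${ns##*n}]=$ns_dev               # e.g. _ctrl_ns[1]=nvme2n1
    done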
00:09:35.090 nvme2n2: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0
00:09:35.090 nvme2n2: rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
00:09:35.091 nvme2n2: npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:09:35.091 nvme2n2: nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:35.091 nvme2n2: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:09:35.091 nvme2n2: lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:09:35.091 09:38:02 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[2]=nvme2n2
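One way these parsed arrays get consumed downstream (an illustrative example, not code from functions.sh): the in-use LBA format index is the low nibble of flbas, and that format's lbads field is log2 of the logical block size. For the namespaces above, flbas=0x4 selects lbaf4 with lbads:12, i.e. 4096-byte blocks:

    fmt=$(( ${nvme2n1[flbas]} & 0xf ))           # 0x4 -> LBA format 4
    lbaf=${nvme2n1[lbaf$fmt]}                    # 'ms:0 lbads:12 rp:0 (in use)'
    lbads=${lbaf#*lbads:}; lbads=${lbads%% *}    # -> 12
    echo $(( 1 << lbads ))                       # -> 4096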
ns in "$ctrl/${ctrl##*/}n"* 00:09:35.091 09:38:02 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:35.091 09:38:02 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:35.091 09:38:02 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:35.091 09:38:02 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:35.091 09:38:02 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:35.091 09:38:02 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:35.091 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.091 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.091 09:38:02 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:35.091 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:35.091 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.091 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.091 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:35.091 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:35.091 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:35.091 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.091 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.091 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:35.091 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:35.091 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:35.091 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.091 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.091 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:35.091 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:35.091 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:35.091 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.091 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.091 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:35.091 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:35.091 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:35.091 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.091 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
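The trace running through here is nvme/functions.sh's nvme_get() flattening `nvme id-ns` output into a bash associative array: with IFS set to ':', each `field : value` line of nvme-cli's human-readable report splits into reg/val, and an eval assigns nvme2n3[reg]=val. A minimal standalone sketch of the same pattern, assuming nvme-cli's default `field : value` output; the function and array names below are illustrative, not from the script:

    # Sketch of the nvme_get() pattern seen in this trace: parse one
    # "field : value" report from nvme-cli into an associative array.
    # parse_id_output and idmap are illustrative names, not from nvme/functions.sh.
    parse_id_output() {
      local dev=$1 reg val
      declare -gA idmap=()
      while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}      # field names are padded, e.g. "nsze    "
        [[ -n $reg && -n $val ]] || continue
        idmap[$reg]=${val# }          # e.g. idmap[nsze]=0x100000, as captured here
      done < <(nvme id-ns "$dev")
    }

After parse_id_output /dev/nvme2n3, ${idmap[nsze]} would hold 0x100000, matching the nvme2n3[nsze] assignment captured in the trace.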
00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:35.092 
09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:35.092 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:35.093 09:38:02 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:35.093 09:38:02 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:35.093 09:38:02 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:35.093 09:38:02 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:35.093 09:38:02 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:35.093 09:38:02 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.093 09:38:02 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:35.093 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.094 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.094 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.356 
09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.356 09:38:02 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:35.356 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.357 
09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.357 09:38:02 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:35.357 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
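Two of the id-ctrl fields captured just above, sqes=0x66 and cqes=0x44, are packed nibbles: per the NVMe base specification, the low nibble is the required (minimum) queue entry size and the high nibble the maximum, each expressed as a power of two in bytes. A quick sketch decoding the exact values from this log:

    # Decode the packed SQES/CQES nibbles recorded above (NVMe base spec:
    # bits 3:0 = required entry size, bits 7:4 = maximum, both as log2 bytes).
    sqes=0x66 cqes=0x44
    printf 'SQ entry size: min %d, max %d bytes\n' $((2 ** (sqes & 0xf))) $((2 ** (sqes >> 4)))
    printf 'CQ entry size: min %d, max %d bytes\n' $((2 ** (cqes & 0xf))) $((2 ** (cqes >> 4)))
    # prints 64/64 and 16/16 -- the standard 64-byte SQE and 16-byte CQE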
00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.358 09:38:02 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:35.358 09:38:02 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
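The ctrl_has_fdp calls running through here all reduce to a single bit test: get_ctratt fetches the CTRATT word captured during id-ctrl parsing, and bit 19 (0x80000) advertises Flexible Data Placement. nvme0, nvme1, and nvme2 report 0x8000, so the check fails silently; only nvme3's 0x88010 has the bit set and gets echoed. The same check in isolation, using the two CTRATT values from this log:

    # ctrl_has_fdp boils down to testing CTRATT bit 19 (Flexible Data Placement).
    for ctratt in 0x8000 0x88010; do   # the two values seen in this log
      if (( ctratt & 1 << 19 )); then
        echo "ctratt=$ctratt: FDP supported"   # only 0x88010 passes
      else
        echo "ctratt=$ctratt: no FDP"
      fi
    done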
00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:09:35.358 09:38:02 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:09:35.358 09:38:02 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:09:35.359 09:38:02 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:09:35.359 09:38:02 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:35.620 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:36.194 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:36.194 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:36.456 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:36.456 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:36.456 09:38:03 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:36.456 09:38:03 nvme_fdp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:09:36.456 09:38:03 
nvme_fdp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:36.456 09:38:03 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:36.456 ************************************ 00:09:36.456 START TEST nvme_flexible_data_placement 00:09:36.456 ************************************ 00:09:36.456 09:38:03 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:36.718 Initializing NVMe Controllers 00:09:36.718 Attaching to 0000:00:13.0 00:09:36.718 Controller supports FDP Attached to 0000:00:13.0 00:09:36.718 Namespace ID: 1 Endurance Group ID: 1 00:09:36.718 Initialization complete. 00:09:36.718 00:09:36.718 ================================== 00:09:36.718 == FDP tests for Namespace: #01 == 00:09:36.718 ================================== 00:09:36.718 00:09:36.718 Get Feature: FDP: 00:09:36.718 ================= 00:09:36.718 Enabled: Yes 00:09:36.718 FDP configuration Index: 0 00:09:36.718 00:09:36.718 FDP configurations log page 00:09:36.718 =========================== 00:09:36.718 Number of FDP configurations: 1 00:09:36.718 Version: 0 00:09:36.718 Size: 112 00:09:36.718 FDP Configuration Descriptor: 0 00:09:36.718 Descriptor Size: 96 00:09:36.718 Reclaim Group Identifier format: 2 00:09:36.718 FDP Volatile Write Cache: Not Present 00:09:36.719 FDP Configuration: Valid 00:09:36.719 Vendor Specific Size: 0 00:09:36.719 Number of Reclaim Groups: 2 00:09:36.719 Number of Reclaim Unit Handles: 8 00:09:36.719 Max Placement Identifiers: 128 00:09:36.719 Number of Namespaces Supported: 256 00:09:36.719 Reclaim Unit Nominal Size: 6000000 bytes 00:09:36.719 Estimated Reclaim Unit Time Limit: Not Reported 00:09:36.719 RUH Desc #000: RUH Type: Initially Isolated 00:09:36.719 RUH Desc #001: RUH Type: Initially Isolated 00:09:36.719 RUH Desc #002: RUH Type: Initially Isolated 00:09:36.719 RUH Desc #003: RUH Type: Initially Isolated 00:09:36.719 RUH Desc #004: RUH Type: Initially Isolated 00:09:36.719 RUH Desc #005: RUH Type: Initially Isolated 00:09:36.719 RUH Desc #006: RUH Type: Initially Isolated 00:09:36.719 RUH Desc #007: RUH Type: Initially Isolated 00:09:36.719 00:09:36.719 FDP reclaim unit handle usage log page 00:09:36.719 ====================================== 00:09:36.719 Number of Reclaim Unit Handles: 8 00:09:36.719 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:36.719 RUH Usage Desc #001: RUH Attributes: Unused 00:09:36.719 RUH Usage Desc #002: RUH Attributes: Unused 00:09:36.719 RUH Usage Desc #003: RUH Attributes: Unused 00:09:36.719 RUH Usage Desc #004: RUH Attributes: Unused 00:09:36.719 RUH Usage Desc #005: RUH Attributes: Unused 00:09:36.719 RUH Usage Desc #006: RUH Attributes: Unused 00:09:36.719 RUH Usage Desc #007: RUH Attributes: Unused 00:09:36.719 00:09:36.719 FDP statistics log page 00:09:36.719 ======================= 00:09:36.719 Host bytes with metadata written: 1054797824 00:09:36.719 Media bytes with metadata written: 1054937088 00:09:36.719 Media bytes erased: 0 00:09:36.719 00:09:36.719 FDP Reclaim unit handle status 00:09:36.719 ============================== 00:09:36.719 Number of RUHS descriptors: 2 00:09:36.719 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000003211 00:09:36.719 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:09:36.719 00:09:36.719 FDP write on placement id: 0 success 00:09:36.719 00:09:36.719 Set Feature: Enabling FDP events on Placement handle:
#0 Success 00:09:36.719 00:09:36.719 IO mgmt send: RUH update for Placement ID: #0 Success 00:09:36.719 00:09:36.719 Get Feature: FDP Events for Placement handle: #0 00:09:36.719 ======================== 00:09:36.719 Number of FDP Events: 6 00:09:36.719 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:09:36.719 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:09:36.719 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:09:36.719 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:09:36.719 FDP Event: #4 Type: Media Reallocated Enabled: No 00:09:36.719 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:09:36.719 00:09:36.719 FDP events log page 00:09:36.719 =================== 00:09:36.719 Number of FDP events: 1 00:09:36.719 FDP Event #0: 00:09:36.719 Event Type: RU Not Written to Capacity 00:09:36.719 Placement Identifier: Valid 00:09:36.719 NSID: Valid 00:09:36.719 Location: Valid 00:09:36.719 Placement Identifier: 0 00:09:36.719 Event Timestamp: f 00:09:36.719 Namespace Identifier: 1 00:09:36.719 Reclaim Group Identifier: 0 00:09:36.719 Reclaim Unit Handle Identifier: 0 00:09:36.719 00:09:36.719 FDP test passed 00:09:36.719 ************************************ 00:09:36.719 END TEST nvme_flexible_data_placement 00:09:36.719 ************************************ 00:09:36.719 00:09:36.719 real 0m0.260s 00:09:36.719 user 0m0.083s 00:09:36.719 sys 0m0.075s 00:09:36.719 09:38:04 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:36.719 09:38:04 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:09:36.719 ************************************ 00:09:36.719 END TEST nvme_fdp 00:09:36.719 ************************************ 00:09:36.719 00:09:36.719 real 0m7.878s 00:09:36.719 user 0m1.089s 00:09:36.719 sys 0m1.488s 00:09:36.719 09:38:04 nvme_fdp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:36.719 09:38:04 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:36.719 09:38:04 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:09:36.719 09:38:04 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:36.719 09:38:04 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:36.719 09:38:04 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:36.719 09:38:04 -- common/autotest_common.sh@10 -- # set +x 00:09:36.719 ************************************ 00:09:36.719 START TEST nvme_rpc 00:09:36.719 ************************************ 00:09:36.719 09:38:04 nvme_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:36.981 * Looking for test storage...
00:09:36.981 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:36.981 09:38:04 nvme_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:36.981 09:38:04 nvme_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:09:36.981 09:38:04 nvme_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:36.981 09:38:04 nvme_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:36.981 09:38:04 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:36.981 09:38:04 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:36.981 09:38:04 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:36.981 09:38:04 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:36.981 09:38:04 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:36.981 09:38:04 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:36.981 09:38:04 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:36.981 09:38:04 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:36.981 09:38:04 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:36.981 09:38:04 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:36.981 09:38:04 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:36.981 09:38:04 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:36.981 09:38:04 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:36.981 09:38:04 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:36.981 09:38:04 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:36.982 09:38:04 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:36.982 09:38:04 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:36.982 09:38:04 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:36.982 09:38:04 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:36.982 09:38:04 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:36.982 09:38:04 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:36.982 09:38:04 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:36.982 09:38:04 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:36.982 09:38:04 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:36.982 09:38:04 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:36.982 09:38:04 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:36.982 09:38:04 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:36.982 09:38:04 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:36.982 09:38:04 nvme_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:36.982 09:38:04 nvme_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:36.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.982 --rc genhtml_branch_coverage=1 00:09:36.982 --rc genhtml_function_coverage=1 00:09:36.982 --rc genhtml_legend=1 00:09:36.982 --rc geninfo_all_blocks=1 00:09:36.982 --rc geninfo_unexecuted_blocks=1 00:09:36.982 00:09:36.982 ' 00:09:36.982 09:38:04 nvme_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:36.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.982 --rc genhtml_branch_coverage=1 00:09:36.982 --rc genhtml_function_coverage=1 00:09:36.982 --rc genhtml_legend=1 00:09:36.982 --rc geninfo_all_blocks=1 00:09:36.982 --rc geninfo_unexecuted_blocks=1 00:09:36.982 00:09:36.982 ' 00:09:36.982 09:38:04 nvme_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:09:36.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.982 --rc genhtml_branch_coverage=1 00:09:36.982 --rc genhtml_function_coverage=1 00:09:36.982 --rc genhtml_legend=1 00:09:36.982 --rc geninfo_all_blocks=1 00:09:36.982 --rc geninfo_unexecuted_blocks=1 00:09:36.982 00:09:36.982 ' 00:09:36.982 09:38:04 nvme_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:36.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.982 --rc genhtml_branch_coverage=1 00:09:36.982 --rc genhtml_function_coverage=1 00:09:36.982 --rc genhtml_legend=1 00:09:36.982 --rc geninfo_all_blocks=1 00:09:36.982 --rc geninfo_unexecuted_blocks=1 00:09:36.982 00:09:36.982 ' 00:09:36.982 09:38:04 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:36.982 09:38:04 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:36.982 09:38:04 nvme_rpc -- common/autotest_common.sh@1507 -- # bdfs=() 00:09:36.982 09:38:04 nvme_rpc -- common/autotest_common.sh@1507 -- # local bdfs 00:09:36.982 09:38:04 nvme_rpc -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:09:36.982 09:38:04 nvme_rpc -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:09:36.982 09:38:04 nvme_rpc -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:36.982 09:38:04 nvme_rpc -- common/autotest_common.sh@1496 -- # local bdfs 00:09:36.982 09:38:04 nvme_rpc -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:36.982 09:38:04 nvme_rpc -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:36.982 09:38:04 nvme_rpc -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:36.982 09:38:04 nvme_rpc -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:36.982 09:38:04 nvme_rpc -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:36.982 09:38:04 nvme_rpc -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:09:36.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:36.982 09:38:04 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:36.982 09:38:04 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=65821 00:09:36.982 09:38:04 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:36.982 09:38:04 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 65821 00:09:36.982 09:38:04 nvme_rpc -- common/autotest_common.sh@833 -- # '[' -z 65821 ']' 00:09:36.982 09:38:04 nvme_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:36.982 09:38:04 nvme_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:36.982 09:38:04 nvme_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:36.982 09:38:04 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:36.982 09:38:04 nvme_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:36.982 09:38:04 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.244 [2024-11-07 09:38:04.685670] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
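The bdf discovery traced above (get_first_nvme_bdf via get_nvme_bdfs) reduces to one pipeline: gen_nvme.sh emits a JSON bdev config and jq extracts each controller's PCI address. A condensed sketch of those steps, with the repo path copied from the trace; this is not the verbatim helper:

    #!/usr/bin/env bash
    # Condensed from the get_nvme_bdfs trace above: gen_nvme.sh prints a JSON
    # bdev config, jq pulls every controller's traddr (PCI bdf).
    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} == 0 )) && { echo "No NVMe bdfs found" >&2; exit 1; }
    echo "${bdfs[0]}"   # first bdf, 0000:00:10.0 in this run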
00:09:37.244 [2024-11-07 09:38:04.685813] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65821 ] 00:09:37.244 [2024-11-07 09:38:04.849699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:37.506 [2024-11-07 09:38:04.971031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:37.506 [2024-11-07 09:38:04.971120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.079 09:38:05 nvme_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:38.079 09:38:05 nvme_rpc -- common/autotest_common.sh@866 -- # return 0 00:09:38.080 09:38:05 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:38.342 Nvme0n1 00:09:38.342 09:38:05 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:38.342 09:38:05 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:38.603 request: 00:09:38.603 { 00:09:38.603 "bdev_name": "Nvme0n1", 00:09:38.603 "filename": "non_existing_file", 00:09:38.603 "method": "bdev_nvme_apply_firmware", 00:09:38.603 "req_id": 1 00:09:38.603 } 00:09:38.603 Got JSON-RPC error response 00:09:38.603 response: 00:09:38.603 { 00:09:38.603 "code": -32603, 00:09:38.603 "message": "open file failed." 00:09:38.603 } 00:09:38.603 09:38:06 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:38.603 09:38:06 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:38.603 09:38:06 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:38.865 09:38:06 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:38.865 09:38:06 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 65821 00:09:38.865 09:38:06 nvme_rpc -- common/autotest_common.sh@952 -- # '[' -z 65821 ']' 00:09:38.865 09:38:06 nvme_rpc -- common/autotest_common.sh@956 -- # kill -0 65821 00:09:38.865 09:38:06 nvme_rpc -- common/autotest_common.sh@957 -- # uname 00:09:38.865 09:38:06 nvme_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:38.865 09:38:06 nvme_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65821 00:09:38.865 killing process with pid 65821 00:09:38.865 09:38:06 nvme_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:38.865 09:38:06 nvme_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:38.865 09:38:06 nvme_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65821' 00:09:38.865 09:38:06 nvme_rpc -- common/autotest_common.sh@971 -- # kill 65821 00:09:38.865 09:38:06 nvme_rpc -- common/autotest_common.sh@976 -- # wait 65821 00:09:40.249 ************************************ 00:09:40.249 END TEST nvme_rpc 00:09:40.249 ************************************ 00:09:40.249 00:09:40.249 real 0m3.449s 00:09:40.249 user 0m6.436s 00:09:40.249 sys 0m0.621s 00:09:40.249 09:38:07 nvme_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:40.249 09:38:07 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:40.249 09:38:07 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:40.249 09:38:07 -- common/autotest_common.sh@1103 -- # '[' 2 -le 
1 ']' 00:09:40.249 09:38:07 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:40.249 09:38:07 -- common/autotest_common.sh@10 -- # set +x 00:09:40.249 ************************************ 00:09:40.249 START TEST nvme_rpc_timeouts 00:09:40.249 ************************************ 00:09:40.249 09:38:07 nvme_rpc_timeouts -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:40.511 * Looking for test storage... 00:09:40.511 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:40.511 09:38:07 nvme_rpc_timeouts -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:40.511 09:38:07 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # lcov --version 00:09:40.511 09:38:07 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:40.511 09:38:08 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:40.511 09:38:08 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:40.511 09:38:08 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:40.511 09:38:08 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:40.511 09:38:08 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:40.511 09:38:08 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:40.511 09:38:08 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:40.511 09:38:08 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:40.511 09:38:08 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:40.511 09:38:08 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:40.511 09:38:08 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:40.511 09:38:08 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:40.511 09:38:08 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:40.511 09:38:08 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:40.511 09:38:08 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:40.511 09:38:08 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:40.511 09:38:08 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:40.511 09:38:08 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:40.511 09:38:08 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:40.511 09:38:08 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:40.511 09:38:08 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:40.511 09:38:08 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:40.511 09:38:08 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:40.511 09:38:08 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:40.511 09:38:08 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:40.511 09:38:08 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:40.511 09:38:08 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:40.511 09:38:08 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:40.511 09:38:08 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:40.511 09:38:08 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:40.511 09:38:08 nvme_rpc_timeouts -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:40.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.511 --rc genhtml_branch_coverage=1 00:09:40.511 --rc genhtml_function_coverage=1 00:09:40.511 --rc genhtml_legend=1 00:09:40.511 --rc geninfo_all_blocks=1 00:09:40.511 --rc geninfo_unexecuted_blocks=1 00:09:40.511 00:09:40.511 ' 00:09:40.511 09:38:08 nvme_rpc_timeouts -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:40.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.511 --rc genhtml_branch_coverage=1 00:09:40.511 --rc genhtml_function_coverage=1 00:09:40.511 --rc genhtml_legend=1 00:09:40.511 --rc geninfo_all_blocks=1 00:09:40.511 --rc geninfo_unexecuted_blocks=1 00:09:40.511 00:09:40.511 ' 00:09:40.511 09:38:08 nvme_rpc_timeouts -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:40.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.511 --rc genhtml_branch_coverage=1 00:09:40.511 --rc genhtml_function_coverage=1 00:09:40.511 --rc genhtml_legend=1 00:09:40.511 --rc geninfo_all_blocks=1 00:09:40.511 --rc geninfo_unexecuted_blocks=1 00:09:40.511 00:09:40.511 ' 00:09:40.511 09:38:08 nvme_rpc_timeouts -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:40.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.511 --rc genhtml_branch_coverage=1 00:09:40.511 --rc genhtml_function_coverage=1 00:09:40.511 --rc genhtml_legend=1 00:09:40.511 --rc geninfo_all_blocks=1 00:09:40.511 --rc geninfo_unexecuted_blocks=1 00:09:40.511 00:09:40.511 ' 00:09:40.511 09:38:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:40.511 09:38:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_65888 00:09:40.511 09:38:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_65888 00:09:40.511 09:38:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=65920 00:09:40.511 09:38:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
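The trap just registered is the harness's standard cleanup idiom: arm the kill-and-remove handler before the target can outlive a failed run, so an interrupt never leaks the spdk_tgt process or the temp settings files. A minimal sketch of the same pattern, using the binary path and core mask shown in the trace (the real script starts the target through a helper and then waits for the RPC socket):

    # Start the target, remember its pid, then arm cleanup for every exit path.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 &
    spdk_tgt_pid=$!
    trap 'kill -9 ${spdk_tgt_pid}; rm -f /tmp/settings_default_$$ /tmp/settings_modified_$$; exit 1' SIGINT SIGTERM EXIT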
00:09:40.511 09:38:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 65920 00:09:40.511 09:38:08 nvme_rpc_timeouts -- common/autotest_common.sh@833 -- # '[' -z 65920 ']' 00:09:40.511 09:38:08 nvme_rpc_timeouts -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:40.511 09:38:08 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:40.511 09:38:08 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:40.511 09:38:08 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:40.511 09:38:08 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:40.511 09:38:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:40.511 [2024-11-07 09:38:08.103582] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:09:40.511 [2024-11-07 09:38:08.103727] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65920 ] 00:09:40.772 [2024-11-07 09:38:08.265983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:40.772 [2024-11-07 09:38:08.372275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:40.772 [2024-11-07 09:38:08.372341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.343 09:38:08 nvme_rpc_timeouts -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:41.343 Checking default timeout settings: 00:09:41.343 09:38:08 nvme_rpc_timeouts -- common/autotest_common.sh@866 -- # return 0 00:09:41.343 09:38:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:41.343 09:38:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:41.914 Making settings changes with rpc: 00:09:41.914 09:38:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:41.914 09:38:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:41.914 Check default vs. modified settings: 00:09:41.914 09:38:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:09:41.914 09:38:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:42.183 09:38:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:42.183 09:38:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:42.183 09:38:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:42.183 09:38:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_65888 00:09:42.183 09:38:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:42.184 09:38:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:42.184 09:38:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:42.184 09:38:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:42.184 09:38:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_65888 00:09:42.184 09:38:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:42.184 09:38:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:42.184 Setting action_on_timeout is changed as expected. 00:09:42.184 09:38:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:09:42.184 09:38:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:42.184 09:38:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_65888 00:09:42.184 09:38:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:42.184 09:38:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:42.184 09:38:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:42.184 09:38:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_65888 00:09:42.184 09:38:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:42.184 09:38:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:42.184 09:38:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:42.184 09:38:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:42.184 Setting timeout_us is changed as expected. 00:09:42.184 09:38:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:09:42.184 09:38:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:42.184 09:38:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_65888 00:09:42.185 09:38:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:42.185 09:38:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:42.185 09:38:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:42.185 09:38:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:42.185 09:38:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:42.185 09:38:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_65888 00:09:42.185 09:38:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:42.185 09:38:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:42.185 Setting timeout_admin_us is changed as expected. 00:09:42.185 09:38:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:09:42.185 09:38:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:42.185 09:38:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_65888 /tmp/settings_modified_65888 00:09:42.185 09:38:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 65920 00:09:42.185 09:38:09 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # '[' -z 65920 ']' 00:09:42.185 09:38:09 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # kill -0 65920 00:09:42.185 09:38:09 nvme_rpc_timeouts -- common/autotest_common.sh@957 -- # uname 00:09:42.185 09:38:09 nvme_rpc_timeouts -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:42.185 09:38:09 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65920 00:09:42.447 09:38:09 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:42.447 09:38:09 nvme_rpc_timeouts -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:42.447 killing process with pid 65920 00:09:42.447 09:38:09 nvme_rpc_timeouts -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65920' 00:09:42.447 09:38:09 nvme_rpc_timeouts -- common/autotest_common.sh@971 -- # kill 65920 00:09:42.447 09:38:09 nvme_rpc_timeouts -- common/autotest_common.sh@976 -- # wait 65920 00:09:43.827 RPC TIMEOUT SETTING TEST PASSED. 00:09:43.827 09:38:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
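Condensed, the test that just passed is a save/modify/save/diff cycle over JSON-RPC: capture the default bdev_nvme options, apply the new timeouts, capture again, and require each of the three fields to differ. A sketch of the same flow, with the rpc.py path and flag values copied from the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" save_config > /tmp/settings_default
    "$rpc" bdev_nvme_set_options --timeout-us=12000000 \
        --timeout-admin-us=24000000 --action-on-timeout=abort
    "$rpc" save_config > /tmp/settings_modified
    for setting in action_on_timeout timeout_us timeout_admin_us; do
        # Same grep | awk | sed pipeline as the trace: take the value column,
        # strip punctuation so "none" and 0 compare cleanly.
        before=$(grep "$setting" /tmp/settings_default  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting"  /tmp/settings_modified | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [[ "$before" == "$after" ]] && { echo "Setting $setting unchanged" >&2; exit 1; }
        echo "Setting $setting is changed as expected."
    done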
00:09:43.827 00:09:43.827 real 0m3.273s 00:09:43.827 user 0m6.372s 00:09:43.827 sys 0m0.488s 00:09:43.827 09:38:11 nvme_rpc_timeouts -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:43.827 09:38:11 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:43.827 ************************************ 00:09:43.827 END TEST nvme_rpc_timeouts 00:09:43.827 ************************************ 00:09:43.827 09:38:11 -- spdk/autotest.sh@239 -- # uname -s 00:09:43.827 09:38:11 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:43.827 09:38:11 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:43.827 09:38:11 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:43.827 09:38:11 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:43.827 09:38:11 -- common/autotest_common.sh@10 -- # set +x 00:09:43.827 ************************************ 00:09:43.827 START TEST sw_hotplug 00:09:43.827 ************************************ 00:09:43.827 09:38:11 sw_hotplug -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:43.827 * Looking for test storage... 00:09:43.827 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:43.827 09:38:11 sw_hotplug -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:43.827 09:38:11 sw_hotplug -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:43.827 09:38:11 sw_hotplug -- common/autotest_common.sh@1691 -- # lcov --version 00:09:43.827 09:38:11 sw_hotplug -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:43.827 09:38:11 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:43.827 09:38:11 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:43.827 09:38:11 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:43.827 09:38:11 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:43.827 09:38:11 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:43.827 09:38:11 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:43.827 09:38:11 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:43.827 09:38:11 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:43.827 09:38:11 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:43.827 09:38:11 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:43.827 09:38:11 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:43.827 09:38:11 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:43.827 09:38:11 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:43.827 09:38:11 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:43.827 09:38:11 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:43.827 09:38:11 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:43.827 09:38:11 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:43.827 09:38:11 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:43.827 09:38:11 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:43.827 09:38:11 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:43.827 09:38:11 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:43.827 09:38:11 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:43.827 09:38:11 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:43.827 09:38:11 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:43.827 09:38:11 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:43.827 09:38:11 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:43.827 09:38:11 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:43.827 09:38:11 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:43.827 09:38:11 sw_hotplug -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:43.827 09:38:11 sw_hotplug -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:43.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.827 --rc genhtml_branch_coverage=1 00:09:43.827 --rc genhtml_function_coverage=1 00:09:43.827 --rc genhtml_legend=1 00:09:43.827 --rc geninfo_all_blocks=1 00:09:43.827 --rc geninfo_unexecuted_blocks=1 00:09:43.827 00:09:43.827 ' 00:09:43.827 09:38:11 sw_hotplug -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:43.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.827 --rc genhtml_branch_coverage=1 00:09:43.827 --rc genhtml_function_coverage=1 00:09:43.827 --rc genhtml_legend=1 00:09:43.827 --rc geninfo_all_blocks=1 00:09:43.827 --rc geninfo_unexecuted_blocks=1 00:09:43.827 00:09:43.827 ' 00:09:43.827 09:38:11 sw_hotplug -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:43.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.827 --rc genhtml_branch_coverage=1 00:09:43.827 --rc genhtml_function_coverage=1 00:09:43.827 --rc genhtml_legend=1 00:09:43.827 --rc geninfo_all_blocks=1 00:09:43.827 --rc geninfo_unexecuted_blocks=1 00:09:43.827 00:09:43.827 ' 00:09:43.827 09:38:11 sw_hotplug -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:43.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.827 --rc genhtml_branch_coverage=1 00:09:43.827 --rc genhtml_function_coverage=1 00:09:43.827 --rc genhtml_legend=1 00:09:43.827 --rc geninfo_all_blocks=1 00:09:43.827 --rc geninfo_unexecuted_blocks=1 00:09:43.827 00:09:43.827 ' 00:09:43.827 09:38:11 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:44.088 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:44.350 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:44.350 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:44.350 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:44.350 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:44.350 09:38:11 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:44.350 09:38:11 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:44.350 09:38:11 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
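nvme_in_userspace, expanded step by step in the trace that follows, boils down to one lspci filter: keep devices whose class/subclass is 01/08 (mass storage / non-volatile memory) with programming interface 02 (NVMe), and print their bdfs. Distilled, using the exact pipeline from the trace:

    # Class 01, subclass 08: match "0108" in machine-readable lspci output;
    # grep keeps only prog-if 02 entries, tr strips the quoting.
    nvmes=($(lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'))
    printf '%s\n' "${nvmes[@]}"   # 0000:00:10.0 through 0000:00:13.0 in this run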
00:09:44.350 09:38:11 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:44.350 09:38:11 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:44.350 09:38:11 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:44.350 09:38:11 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:44.350 09:38:11 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:44.350 09:38:11 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:44.350 09:38:11 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:44.350 09:38:11 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:44.350 09:38:11 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:44.350 09:38:11 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:44.350 09:38:11 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:44.350 09:38:11 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:44.350 09:38:11 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:44.350 09:38:11 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:44.350 09:38:11 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:44.350 09:38:11 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:44.350 09:38:11 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:44.350 09:38:11 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:44.350 09:38:11 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:44.350 09:38:11 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:44.350 09:38:11 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:44.350 09:38:11 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:44.350 09:38:11 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:44.350 09:38:11 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:44.350 09:38:11 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:44.350 09:38:11 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:44.350 09:38:11 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:44.350 09:38:11 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:44.350 09:38:11 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:44.350 09:38:11 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:09:44.350 09:38:11 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:44.350 09:38:11 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:44.350 09:38:11 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:44.350 09:38:11 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:44.350 09:38:11 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:44.350 09:38:11 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:44.350 09:38:11 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:44.350 09:38:11 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:44.350 09:38:11 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:44.350 09:38:11 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:44.350 09:38:11 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:44.350 09:38:11 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:44.350 09:38:11 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:44.350 09:38:11 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:44.350 09:38:11 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:44.350 09:38:11 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:44.350 09:38:11 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:44.350 09:38:11 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:44.350 09:38:11 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:44.350 09:38:11 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:44.350 09:38:11 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:44.350 09:38:11 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:44.350 09:38:11 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:44.351 09:38:11 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:44.351 09:38:11 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:44.351 09:38:11 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:44.351 09:38:11 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:44.351 09:38:11 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:44.351 09:38:11 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:44.351 09:38:11 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:44.351 09:38:11 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:44.351 09:38:11 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:44.351 09:38:11 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:44.351 09:38:11 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:44.351 09:38:11 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:44.351 09:38:11 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:44.351 09:38:11 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:44.351 09:38:11 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:44.351 09:38:11 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:44.351 09:38:11 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:44.351 09:38:11 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:44.351 09:38:11 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:44.351 09:38:11 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:44.351 09:38:11 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:44.351 09:38:11 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:44.351 09:38:11 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:44.611 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:44.874 Waiting for block devices as requested 00:09:44.874 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:44.874 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:45.178 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:45.178 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:50.455 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:50.455 09:38:17 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:09:50.455 09:38:17 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:50.716 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:09:50.716 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:50.716 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:09:50.975 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:09:51.236 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:51.236 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:51.236 09:38:18 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:09:51.236 09:38:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:51.495 09:38:18 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:09:51.495 09:38:18 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:09:51.495 09:38:18 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=66780 00:09:51.495 09:38:18 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:09:51.495 09:38:18 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:51.495 09:38:18 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:09:51.495 09:38:18 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:09:51.495 09:38:18 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:09:51.495 09:38:18 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:09:51.495 09:38:18 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:09:51.495 09:38:18 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:09:51.495 09:38:18 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:09:51.495 09:38:18 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:51.495 09:38:18 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:09:51.495 09:38:18 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:09:51.495 09:38:18 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:51.495 09:38:18 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:09:51.755 Initializing NVMe Controllers 00:09:51.755 Attaching to 0000:00:10.0 00:09:51.755 Attaching to 0000:00:11.0 00:09:51.755 Attached to 0000:00:10.0 00:09:51.755 Attached to 0000:00:11.0 00:09:51.755 Initialization complete. Starting I/O... 
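The bare 'echo 1' / 'echo uio_pci_generic' / 'echo 0000:00:10.0' writes traced below are the surprise-removal mechanics: the script pokes PCI sysfs to yank each controller while the hotplug example keeps I/O in flight, then rescans and re-binds it to the userspace driver. The trace does not echo the sysfs paths themselves, so the following is a plausible reconstruction of one cycle with the paths as assumptions:

    bdf=0000:00:10.0                                  # first controller under test
    echo 1 > "/sys/bus/pci/devices/$bdf/remove"       # hot-remove under live I/O (path assumed)
    sleep 6                                           # hotplug_wait, as set in the trace
    echo 1 > /sys/bus/pci/rescan                      # bring the device back (path assumed)
    echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers/uio_pci_generic/bind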
00:09:51.755 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:09:51.756 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:09:51.756 00:09:52.698 QEMU NVMe Ctrl (12340 ): 2212 I/Os completed (+2212) 00:09:52.698 QEMU NVMe Ctrl (12341 ): 2212 I/Os completed (+2212) 00:09:52.698 00:09:53.640 QEMU NVMe Ctrl (12340 ): 5044 I/Os completed (+2832) 00:09:53.640 QEMU NVMe Ctrl (12341 ): 5044 I/Os completed (+2832) 00:09:53.640 00:09:54.585 QEMU NVMe Ctrl (12340 ): 7812 I/Os completed (+2768) 00:09:54.585 QEMU NVMe Ctrl (12341 ): 7812 I/Os completed (+2768) 00:09:54.585 00:09:55.526 QEMU NVMe Ctrl (12340 ): 10588 I/Os completed (+2776) 00:09:55.526 QEMU NVMe Ctrl (12341 ): 10596 I/Os completed (+2784) 00:09:55.526 00:09:56.913 QEMU NVMe Ctrl (12340 ): 13340 I/Os completed (+2752) 00:09:56.913 QEMU NVMe Ctrl (12341 ): 13348 I/Os completed (+2752) 00:09:56.913 00:09:57.482 09:38:24 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:57.483 09:38:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:57.483 09:38:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:57.483 [2024-11-07 09:38:24.966805] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:57.483 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:57.483 [2024-11-07 09:38:24.967839] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.483 [2024-11-07 09:38:24.967877] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.483 [2024-11-07 09:38:24.967893] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.483 [2024-11-07 09:38:24.967910] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.483 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:57.483 [2024-11-07 09:38:24.969824] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.483 [2024-11-07 09:38:24.969871] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.483 [2024-11-07 09:38:24.969882] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.483 [2024-11-07 09:38:24.969895] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.483 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:10.0/vendor 00:09:57.483 EAL: Scan for (pci) bus failed. 00:09:57.483 09:38:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:57.483 09:38:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:57.483 [2024-11-07 09:38:24.991300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:09:57.483 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:57.483 [2024-11-07 09:38:24.992178] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.483 [2024-11-07 09:38:24.992215] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.483 [2024-11-07 09:38:24.992235] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.483 [2024-11-07 09:38:24.992250] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.483 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:57.483 [2024-11-07 09:38:24.993608] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.483 [2024-11-07 09:38:24.993651] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.483 [2024-11-07 09:38:24.993666] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.483 [2024-11-07 09:38:24.993678] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.483 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:09:57.483 EAL: Scan for (pci) bus failed. 00:09:57.483 09:38:25 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:57.483 09:38:25 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:57.483 09:38:25 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:57.483 09:38:25 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:57.483 09:38:25 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:57.743 09:38:25 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:57.743 09:38:25 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:57.743 09:38:25 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:57.743 09:38:25 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:57.743 09:38:25 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:57.743 Attaching to 0000:00:10.0 00:09:57.743 Attached to 0000:00:10.0 00:09:57.743 QEMU NVMe Ctrl (12340 ): 8 I/Os completed (+8) 00:09:57.743 00:09:57.743 09:38:25 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:57.743 09:38:25 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:57.743 09:38:25 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:57.743 Attaching to 0000:00:11.0 00:09:57.743 Attached to 0000:00:11.0 00:09:58.687 QEMU NVMe Ctrl (12340 ): 2910 I/Os completed (+2902) 00:09:58.687 QEMU NVMe Ctrl (12341 ): 2668 I/Os completed (+2668) 00:09:58.687 00:09:59.630 QEMU NVMe Ctrl (12340 ): 5712 I/Os completed (+2802) 00:09:59.630 QEMU NVMe Ctrl (12341 ): 5490 I/Os completed (+2822) 00:09:59.630 00:10:00.574 QEMU NVMe Ctrl (12340 ): 8449 I/Os completed (+2737) 00:10:00.574 QEMU NVMe Ctrl (12341 ): 8222 I/Os completed (+2732) 00:10:00.574 00:10:01.517 QEMU NVMe Ctrl (12340 ): 11285 I/Os completed (+2836) 00:10:01.517 QEMU NVMe Ctrl (12341 ): 11141 I/Os completed (+2919) 00:10:01.517 00:10:02.904 QEMU NVMe Ctrl (12340 ): 14080 I/Os completed (+2795) 00:10:02.904 QEMU NVMe Ctrl (12341 ): 13922 I/Os completed (+2781) 00:10:02.904 00:10:03.838 QEMU NVMe Ctrl (12340 ): 16928 I/Os completed (+2848) 00:10:03.838 QEMU NVMe Ctrl (12341 ): 16765 I/Os completed (+2843) 00:10:03.838 00:10:04.806 QEMU NVMe Ctrl (12340 ): 20477 I/Os completed (+3549) 00:10:04.806 
QEMU NVMe Ctrl (12341 ): 20283 I/Os completed (+3518) 00:10:04.806 00:10:05.744 QEMU NVMe Ctrl (12340 ): 23540 I/Os completed (+3063) 00:10:05.745 QEMU NVMe Ctrl (12341 ): 23343 I/Os completed (+3060) 00:10:05.745 00:10:06.678 QEMU NVMe Ctrl (12340 ): 27235 I/Os completed (+3695) 00:10:06.678 QEMU NVMe Ctrl (12341 ): 27031 I/Os completed (+3688) 00:10:06.678 00:10:07.622 QEMU NVMe Ctrl (12340 ): 30513 I/Os completed (+3278) 00:10:07.622 QEMU NVMe Ctrl (12341 ): 30266 I/Os completed (+3235) 00:10:07.622 00:10:08.556 QEMU NVMe Ctrl (12340 ): 33817 I/Os completed (+3304) 00:10:08.556 QEMU NVMe Ctrl (12341 ): 33599 I/Os completed (+3333) 00:10:08.556 00:10:09.940 QEMU NVMe Ctrl (12340 ): 36821 I/Os completed (+3004) 00:10:09.940 QEMU NVMe Ctrl (12341 ): 36635 I/Os completed (+3036) 00:10:09.940 00:10:09.940 09:38:37 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:09.940 09:38:37 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:09.940 09:38:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:09.940 09:38:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:09.940 [2024-11-07 09:38:37.256200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:09.940 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:09.940 [2024-11-07 09:38:37.257757] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.940 [2024-11-07 09:38:37.257939] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.940 [2024-11-07 09:38:37.257981] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.940 [2024-11-07 09:38:37.258470] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.940 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:09.940 [2024-11-07 09:38:37.263470] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.940 [2024-11-07 09:38:37.263549] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.940 [2024-11-07 09:38:37.263566] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.940 [2024-11-07 09:38:37.263584] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.940 09:38:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:09.940 09:38:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:09.940 [2024-11-07 09:38:37.281620] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:09.940 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:09.940 [2024-11-07 09:38:37.282814] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.940 [2024-11-07 09:38:37.282881] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.940 [2024-11-07 09:38:37.282906] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.940 [2024-11-07 09:38:37.282922] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.940 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:09.940 [2024-11-07 09:38:37.284887] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.940 [2024-11-07 09:38:37.284942] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.940 [2024-11-07 09:38:37.284960] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.940 [2024-11-07 09:38:37.284977] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.940 09:38:37 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:09.940 09:38:37 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:09.940 09:38:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:09.940 09:38:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:09.940 09:38:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:09.940 09:38:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:09.940 09:38:37 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:09.940 09:38:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:09.940 09:38:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:09.940 09:38:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:09.940 Attaching to 0000:00:10.0 00:10:09.940 Attached to 0000:00:10.0 00:10:09.940 09:38:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:09.940 09:38:37 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:09.940 09:38:37 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:09.940 Attaching to 0000:00:11.0 00:10:09.940 Attached to 0000:00:11.0 00:10:10.514 QEMU NVMe Ctrl (12340 ): 1868 I/Os completed (+1868) 00:10:10.514 QEMU NVMe Ctrl (12341 ): 1668 I/Os completed (+1668) 00:10:10.514 00:10:11.900 QEMU NVMe Ctrl (12340 ): 4644 I/Os completed (+2776) 00:10:11.900 QEMU NVMe Ctrl (12341 ): 4453 I/Os completed (+2785) 00:10:11.900 00:10:12.838 QEMU NVMe Ctrl (12340 ): 7472 I/Os completed (+2828) 00:10:12.838 QEMU NVMe Ctrl (12341 ): 7281 I/Os completed (+2828) 00:10:12.838 00:10:13.772 QEMU NVMe Ctrl (12340 ): 11122 I/Os completed (+3650) 00:10:13.772 QEMU NVMe Ctrl (12341 ): 10930 I/Os completed (+3649) 00:10:13.772 00:10:14.714 QEMU NVMe Ctrl (12340 ): 14326 I/Os completed (+3204) 00:10:14.714 QEMU NVMe Ctrl (12341 ): 14197 I/Os completed (+3267) 00:10:14.714 00:10:15.657 QEMU NVMe Ctrl (12340 ): 17301 I/Os completed (+2975) 00:10:15.657 QEMU NVMe Ctrl (12341 ): 17163 I/Os completed (+2966) 00:10:15.657 00:10:16.600 QEMU NVMe Ctrl (12340 ): 20045 I/Os completed (+2744) 00:10:16.600 QEMU NVMe Ctrl (12341 ): 19910 I/Os completed (+2747) 00:10:16.600 00:10:17.543 QEMU NVMe Ctrl (12340 ): 22709 I/Os completed (+2664) 00:10:17.543 QEMU NVMe Ctrl (12341 ): 22577 I/Os completed (+2667) 00:10:17.543 
00:10:18.925 QEMU NVMe Ctrl (12340 ): 25361 I/Os completed (+2652) 00:10:18.925 QEMU NVMe Ctrl (12341 ): 25233 I/Os completed (+2656) 00:10:18.925 00:10:19.867 QEMU NVMe Ctrl (12340 ): 28021 I/Os completed (+2660) 00:10:19.867 QEMU NVMe Ctrl (12341 ): 27893 I/Os completed (+2660) 00:10:19.867 00:10:20.801 QEMU NVMe Ctrl (12340 ): 31290 I/Os completed (+3269) 00:10:20.801 QEMU NVMe Ctrl (12341 ): 31173 I/Os completed (+3280) 00:10:20.801 00:10:21.770 QEMU NVMe Ctrl (12340 ): 34946 I/Os completed (+3656) 00:10:21.770 QEMU NVMe Ctrl (12341 ): 34828 I/Os completed (+3655) 00:10:21.770 00:10:22.028 09:38:49 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:22.028 09:38:49 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:22.028 09:38:49 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:22.028 09:38:49 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:22.028 [2024-11-07 09:38:49.589019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:22.028 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:22.028 [2024-11-07 09:38:49.590084] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.028 [2024-11-07 09:38:49.590200] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.028 [2024-11-07 09:38:49.590231] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.028 [2024-11-07 09:38:49.590290] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.028 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:22.028 [2024-11-07 09:38:49.591963] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.028 [2024-11-07 09:38:49.592099] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.028 [2024-11-07 09:38:49.592129] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.028 [2024-11-07 09:38:49.592225] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.028 09:38:49 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:22.028 09:38:49 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:22.028 [2024-11-07 09:38:49.607291] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
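For orientation: the `echo 1` traced at nvme/sw_hotplug.sh:40 is the software hot-remove. Writing 1 to a PCI device's sysfs remove node detaches it from the bus; SPDK then sees the controller vanish, marks it failed (the nvme_ctrlr_fail ERROR lines) and aborts every command still in flight on its queue pairs (the nvme_pcie_qpair_abort_trackers lines - the four aborted commands per controller are the pending ASYNC EVENT REQUESTs). A minimal sketch of that loop; the sysfs path is inferred from the standard mechanism, not quoted from the script:

    # reconstruction of sw_hotplug.sh:39-40 - hot-remove each controller
    for dev in "${nvmes[@]}"; do    # e.g. nvmes=(0000:00:10.0 0000:00:11.0)
        echo 1 > "/sys/bus/pci/devices/$dev/remove"
    done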
00:10:22.028 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:22.028 [2024-11-07 09:38:49.608258] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.028 [2024-11-07 09:38:49.608355] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.028 [2024-11-07 09:38:49.608407] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.028 [2024-11-07 09:38:49.608432] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.028 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:22.028 [2024-11-07 09:38:49.609892] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.028 [2024-11-07 09:38:49.609979] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.028 [2024-11-07 09:38:49.610046] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.028 [2024-11-07 09:38:49.610070] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.028 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:22.028 09:38:49 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:22.028 EAL: Scan for (pci) bus failed. 00:10:22.028 09:38:49 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:22.028 09:38:49 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:22.028 09:38:49 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:22.028 09:38:49 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:22.287 09:38:49 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:22.287 09:38:49 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:22.287 09:38:49 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:22.287 09:38:49 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:22.287 09:38:49 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:22.287 Attaching to 0000:00:10.0 00:10:22.287 Attached to 0000:00:10.0 00:10:22.287 09:38:49 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:22.287 09:38:49 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:22.287 09:38:49 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:22.287 Attaching to 0000:00:11.0 00:10:22.287 Attached to 0000:00:11.0 00:10:22.287 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:22.287 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:22.287 [2024-11-07 09:38:49.845897] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:34.517 09:39:01 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:34.517 09:39:01 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:34.517 09:39:01 sw_hotplug -- common/autotest_common.sh@717 -- # time=42.87 00:10:34.517 09:39:01 sw_hotplug -- common/autotest_common.sh@718 -- # echo 42.87 00:10:34.517 09:39:01 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:10:34.517 09:39:01 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.87 00:10:34.517 09:39:01 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.87 2 00:10:34.517 remove_attach_helper took 42.87s to complete (handling 2 nvme drive(s)) 09:39:01 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:10:41.104 09:39:07 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 66780 00:10:41.104 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (66780) - No such process 00:10:41.104 09:39:07 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 66780 00:10:41.104 09:39:07 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:10:41.104 09:39:07 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:10:41.104 09:39:07 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:10:41.104 09:39:07 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=67325 00:10:41.104 09:39:07 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:10:41.104 09:39:07 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:41.104 09:39:07 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 67325 00:10:41.104 09:39:07 sw_hotplug -- common/autotest_common.sh@833 -- # '[' -z 67325 ']' 00:10:41.104 09:39:07 sw_hotplug -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:41.104 09:39:07 sw_hotplug -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:41.104 09:39:07 sw_hotplug -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:41.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:41.104 09:39:07 sw_hotplug -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:41.104 09:39:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:41.104 [2024-11-07 09:39:07.933592] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
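Here the harness switches to the target-side phase: it launches spdk_tgt and blocks in waitforlisten until the RPC socket answers. A sketch of that pattern, assembled from the traced variable names (pid, rpc_addr=/var/tmp/spdk.sock, max_retries=100); the rpc_get_methods probe and the 0.1s poll interval are assumptions, not quotes of the helper:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1    # target died while starting
            if [[ -S $rpc_addr ]] && scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0                               # socket is up and answering RPCs
            fi
            sleep 0.1
        done
        return 1
    }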
00:10:41.104 [2024-11-07 09:39:07.934029] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67325 ] 00:10:41.104 [2024-11-07 09:39:08.095131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.104 [2024-11-07 09:39:08.215595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.365 09:39:08 sw_hotplug -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:41.365 09:39:08 sw_hotplug -- common/autotest_common.sh@866 -- # return 0 00:10:41.365 09:39:08 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:41.365 09:39:08 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.365 09:39:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:41.365 09:39:08 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.365 09:39:08 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:41.365 09:39:08 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:41.365 09:39:08 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:41.365 09:39:08 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:10:41.365 09:39:08 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:10:41.365 09:39:08 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:10:41.365 09:39:08 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:10:41.365 09:39:08 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:10:41.365 09:39:08 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:41.365 09:39:08 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:41.365 09:39:08 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:41.365 09:39:08 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:41.365 09:39:08 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:47.933 09:39:14 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:47.933 09:39:14 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:47.933 09:39:14 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:47.933 09:39:14 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:47.933 09:39:14 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:47.933 09:39:14 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:47.933 09:39:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:47.933 09:39:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:47.933 09:39:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:47.933 09:39:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:47.933 09:39:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:47.933 09:39:14 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.933 09:39:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:47.933 09:39:14 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.933 09:39:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:47.933 09:39:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:47.933 [2024-11-07 09:39:15.009276] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0, 0] in failed state. 00:10:47.933 [2024-11-07 09:39:15.010468] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.933 [2024-11-07 09:39:15.010505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.933 [2024-11-07 09:39:15.010517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.933 [2024-11-07 09:39:15.010535] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.933 [2024-11-07 09:39:15.010543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.933 [2024-11-07 09:39:15.010552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.933 [2024-11-07 09:39:15.010559] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.933 [2024-11-07 09:39:15.010567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.933 [2024-11-07 09:39:15.010573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.933 [2024-11-07 09:39:15.010585] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.933 [2024-11-07 09:39:15.010591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.933 [2024-11-07 09:39:15.010599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.933 [2024-11-07 09:39:15.409266] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
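The helper behind the repeated `(( 2 > 0 ))` / `sleep 0.5` traces can be read almost verbatim out of the xtrace: sw_hotplug.sh:12-13 list the PCI address of every NVMe-backed bdev the target still reports, and lines 50-51 poll until that list drains after a removal. Reassembled:

    bdev_bdfs() {
        rpc_cmd bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }

    # sw_hotplug.sh:50-51 - wait for the removed controllers' bdevs to go away
    while bdfs=($(bdev_bdfs)) && ((${#bdfs[@]} > 0)); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
    done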
00:10:47.933 [2024-11-07 09:39:15.410440] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.933 [2024-11-07 09:39:15.410472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.933 [2024-11-07 09:39:15.410483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.933 [2024-11-07 09:39:15.410494] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.933 [2024-11-07 09:39:15.410503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.933 [2024-11-07 09:39:15.410510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.933 [2024-11-07 09:39:15.410518] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.933 [2024-11-07 09:39:15.410524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.933 [2024-11-07 09:39:15.410553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.933 [2024-11-07 09:39:15.410560] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.933 [2024-11-07 09:39:15.410568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.933 [2024-11-07 09:39:15.410575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.933 09:39:15 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:47.933 09:39:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:47.933 09:39:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:47.933 09:39:15 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:47.933 09:39:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:47.933 09:39:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:47.933 09:39:15 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.933 09:39:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:47.933 09:39:15 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.933 09:39:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:47.933 09:39:15 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:48.192 09:39:15 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:48.192 09:39:15 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:48.192 09:39:15 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:48.192 09:39:15 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:48.192 09:39:15 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:48.192 09:39:15 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:48.192 09:39:15 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:48.192 09:39:15 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:10:48.192 09:39:15 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:48.192 09:39:15 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:48.192 09:39:15 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:00.393 09:39:27 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:00.393 09:39:27 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:00.393 09:39:27 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:00.393 09:39:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:00.393 09:39:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:00.393 09:39:27 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.393 09:39:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:00.393 09:39:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:00.393 09:39:27 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.393 09:39:27 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:00.393 09:39:27 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:00.393 09:39:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:00.393 09:39:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:00.393 09:39:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:00.393 09:39:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:00.393 09:39:27 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:00.393 09:39:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:00.393 09:39:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:00.393 09:39:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:00.393 09:39:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:00.393 09:39:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:00.393 09:39:27 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.393 09:39:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:00.393 09:39:27 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.393 [2024-11-07 09:39:27.909470] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:11:00.393 09:39:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:00.393 09:39:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:00.393 [2024-11-07 09:39:27.910623] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.393 [2024-11-07 09:39:27.910664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:00.393 [2024-11-07 09:39:27.910675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:00.393 [2024-11-07 09:39:27.910691] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.393 [2024-11-07 09:39:27.910699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:00.393 [2024-11-07 09:39:27.910707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:00.393 [2024-11-07 09:39:27.910715] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.393 [2024-11-07 09:39:27.910723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:00.393 [2024-11-07 09:39:27.910729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:00.393 [2024-11-07 09:39:27.910738] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.393 [2024-11-07 09:39:27.910744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:00.393 [2024-11-07 09:39:27.910752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:00.961 [2024-11-07 09:39:28.409472] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
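The re-attach half of each iteration is traced as sw_hotplug.sh:56-62 (first seen above around 00:10:48): one bus rescan, then per BDF a driver_override so the rediscovered device binds to uio_pci_generic, followed by a probe and a reset of the override. The trace echoes each BDF twice at lines 60-61 - plausibly a bind plus a drivers_probe - which the sketch below collapses into a single probe; the sysfs paths are a reconstruction, not quotes:

    echo 1 > /sys/bus/pci/rescan                                            # :56
    for dev in "${nvmes[@]}"; do                                            # :58
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"  # :59
        echo "$dev" > /sys/bus/pci/drivers_probe                            # :60-61 (collapsed)
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"               # :62
    done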
00:11:00.961 [2024-11-07 09:39:28.410600] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.961 [2024-11-07 09:39:28.410640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:00.961 [2024-11-07 09:39:28.410653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:00.961 [2024-11-07 09:39:28.410664] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.961 [2024-11-07 09:39:28.410674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:00.961 [2024-11-07 09:39:28.410680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:00.961 [2024-11-07 09:39:28.410688] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.961 [2024-11-07 09:39:28.410695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:00.961 [2024-11-07 09:39:28.410702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:00.961 [2024-11-07 09:39:28.410709] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.961 [2024-11-07 09:39:28.410717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:00.961 [2024-11-07 09:39:28.410723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:00.961 09:39:28 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:00.961 09:39:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:00.961 09:39:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:00.961 09:39:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:00.961 09:39:28 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.961 09:39:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:00.961 09:39:28 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:00.961 09:39:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:00.961 09:39:28 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.961 09:39:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:00.961 09:39:28 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:00.961 09:39:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:00.961 09:39:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:00.961 09:39:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:00.961 09:39:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:01.221 09:39:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:01.221 09:39:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:01.221 09:39:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:01.221 09:39:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:01.221 09:39:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:01.221 09:39:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:01.221 09:39:28 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:13.414 09:39:40 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:13.414 09:39:40 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:13.414 09:39:40 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:13.414 09:39:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:13.414 09:39:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:13.414 09:39:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:13.414 09:39:40 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.414 09:39:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:13.414 09:39:40 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.414 09:39:40 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:13.415 09:39:40 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:13.415 09:39:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:13.415 09:39:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:13.415 09:39:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:13.415 09:39:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:13.415 09:39:40 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:13.415 09:39:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:13.415 09:39:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:13.415 09:39:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:13.415 09:39:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:13.415 09:39:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:13.415 09:39:40 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.415 09:39:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:13.415 [2024-11-07 09:39:40.809678] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
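Each iteration closes with the `[[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0... ]]` check seen above: the backslash-riddled right-hand side is just bash xtrace quoting of a literal string match, asserting that bdev_get_bdevs again reports exactly both controllers before the next hotplug event fires. Roughly (the nvmes comparison variable is an assumption):

    # sw_hotplug.sh:70-71 (reconstruction)
    bdfs=($(bdev_bdfs))
    [[ ${bdfs[*]} == "${nvmes[*]}" ]]   # e.g. "0000:00:10.0 0000:00:11.0"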
00:11:13.415 [2024-11-07 09:39:40.810903] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:13.415 [2024-11-07 09:39:40.810937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:13.415 [2024-11-07 09:39:40.810948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:13.415 [2024-11-07 09:39:40.810965] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:13.415 [2024-11-07 09:39:40.810972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:13.415 [2024-11-07 09:39:40.810982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:13.415 [2024-11-07 09:39:40.810989] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:13.415 [2024-11-07 09:39:40.810997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:13.415 [2024-11-07 09:39:40.811003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:13.415 [2024-11-07 09:39:40.811011] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:13.415 [2024-11-07 09:39:40.811018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:13.415 [2024-11-07 09:39:40.811026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:13.415 09:39:40 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.415 09:39:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:13.415 09:39:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:13.673 [2024-11-07 09:39:41.209675] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:13.673 [2024-11-07 09:39:41.210786] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:13.673 [2024-11-07 09:39:41.210816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:13.673 [2024-11-07 09:39:41.210826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:13.673 [2024-11-07 09:39:41.210838] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:13.673 [2024-11-07 09:39:41.210853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:13.673 [2024-11-07 09:39:41.210860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:13.673 [2024-11-07 09:39:41.210869] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:13.673 [2024-11-07 09:39:41.210876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:13.673 [2024-11-07 09:39:41.210885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:13.673 [2024-11-07 09:39:41.210892] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:13.673 [2024-11-07 09:39:41.210899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:13.673 [2024-11-07 09:39:41.210906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:13.673 09:39:41 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:13.673 09:39:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:13.673 09:39:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:13.673 09:39:41 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:13.673 09:39:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:13.673 09:39:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:13.673 09:39:41 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.673 09:39:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:13.673 09:39:41 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.931 09:39:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:13.931 09:39:41 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:13.931 09:39:41 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:13.931 09:39:41 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:13.931 09:39:41 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:13.931 09:39:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:13.931 09:39:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:13.931 09:39:41 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:13.931 09:39:41 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:13.931 09:39:41 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:13.931 09:39:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:13.931 09:39:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:13.931 09:39:41 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:26.148 09:39:53 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:26.148 09:39:53 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:26.148 09:39:53 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:26.148 09:39:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:26.148 09:39:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:26.148 09:39:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:26.148 09:39:53 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.148 09:39:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:26.148 09:39:53 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.148 09:39:53 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:26.148 09:39:53 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:26.148 09:39:53 sw_hotplug -- common/autotest_common.sh@717 -- # time=44.70 00:11:26.148 09:39:53 sw_hotplug -- common/autotest_common.sh@718 -- # echo 44.70 00:11:26.148 09:39:53 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:11:26.148 09:39:53 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.70 00:11:26.148 09:39:53 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.70 2 00:11:26.148 remove_attach_helper took 44.70s to complete (handling 2 nvme drive(s)) 09:39:53 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:26.148 09:39:53 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.148 09:39:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:26.148 09:39:53 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.148 09:39:53 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:26.148 09:39:53 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.148 09:39:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:26.148 09:39:53 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.148 09:39:53 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:26.148 09:39:53 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:26.148 09:39:53 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:26.148 09:39:53 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:11:26.148 09:39:53 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:11:26.148 09:39:53 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:11:26.148 09:39:53 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:11:26.148 09:39:53 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:11:26.148 09:39:53 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:26.148 09:39:53 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:26.148 09:39:53 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:26.148 09:39:53 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:26.148 09:39:53 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:32.702 09:39:59 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:32.702 09:39:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:32.702 09:39:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:32.702 09:39:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:32.702 09:39:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:32.702 09:39:59 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:32.702 09:39:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:32.702 09:39:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:32.702 09:39:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:32.702 09:39:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:32.702 09:39:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:32.702 09:39:59 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.702 09:39:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:32.702 09:39:59 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.702 09:39:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:32.702 09:39:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:32.702 [2024-11-07 09:39:59.740922] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:32.702 [2024-11-07 09:39:59.741832] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:32.702 [2024-11-07 09:39:59.741865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:32.702 [2024-11-07 09:39:59.741876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.703 [2024-11-07 09:39:59.741895] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:32.703 [2024-11-07 09:39:59.741903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:32.703 [2024-11-07 09:39:59.741911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.703 [2024-11-07 09:39:59.741918] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:32.703 [2024-11-07 09:39:59.741926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:32.703 [2024-11-07 09:39:59.741933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.703 [2024-11-07 09:39:59.741942] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:32.703 [2024-11-07 09:39:59.741948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:32.703 [2024-11-07 09:39:59.741958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.703 [2024-11-07 09:40:00.140921] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
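Putting the traced script lines together, the helper these iterations come from has this shape (a reconstruction from the line numbers in the xtrace, with details of the individual steps sketched above):

    remove_attach_helper() {
        local hotplug_events=$1 hotplug_wait=$2 use_bdev=$3   # here: 3 6 true
        local dev bdfs

        sleep "$hotplug_wait"                  # :36 - let the I/O load ramp up
        while ((hotplug_events--)); do         # :38 - three hotplug events per call
            for dev in "${nvmes[@]}"; do       # :39-40 - hot-remove
                echo 1 > "/sys/bus/pci/devices/$dev/remove"
            done
            # :50-51 - wait until no bdev references the removed BDFs
            while bdfs=($(bdev_bdfs)) && ((${#bdfs[@]} > 0)); do
                printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
                sleep 0.5
            done
            echo 1 > /sys/bus/pci/rescan       # :56 - bring the devices back
            # :58-62 - rebind to uio_pci_generic (see the sketch above)
            sleep $((hotplug_wait * 2))        # :66 - let I/O resume before the next event
        done
    }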
00:11:32.703 [2024-11-07 09:40:00.141794] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:32.703 [2024-11-07 09:40:00.141824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:32.703 [2024-11-07 09:40:00.141835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.703 [2024-11-07 09:40:00.141846] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:32.703 [2024-11-07 09:40:00.141854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:32.703 [2024-11-07 09:40:00.141861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.703 [2024-11-07 09:40:00.141870] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:32.703 [2024-11-07 09:40:00.141876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:32.703 [2024-11-07 09:40:00.141884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.703 [2024-11-07 09:40:00.141891] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:32.703 [2024-11-07 09:40:00.141899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:32.703 [2024-11-07 09:40:00.141906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.703 09:40:00 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:32.703 09:40:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:32.703 09:40:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:32.703 09:40:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:32.703 09:40:00 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:32.703 09:40:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:32.703 09:40:00 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.703 09:40:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:32.703 09:40:00 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.703 09:40:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:32.703 09:40:00 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:32.703 09:40:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:32.703 09:40:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:32.703 09:40:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:32.960 09:40:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:32.960 09:40:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:32.960 09:40:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:32.960 09:40:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:32.960 09:40:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:32.960 09:40:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:32.960 09:40:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:32.960 09:40:00 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:45.157 09:40:12 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:45.157 09:40:12 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:45.157 09:40:12 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:45.157 09:40:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:45.157 09:40:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:45.157 09:40:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:45.157 09:40:12 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.157 09:40:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:45.157 09:40:12 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.157 09:40:12 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:45.157 09:40:12 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:45.157 09:40:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:45.157 09:40:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:45.157 [2024-11-07 09:40:12.541136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:45.157 [2024-11-07 09:40:12.542168] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:45.157 [2024-11-07 09:40:12.542206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:45.157 [2024-11-07 09:40:12.542217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:45.158 [2024-11-07 09:40:12.542232] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:45.158 [2024-11-07 09:40:12.542240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:45.158 [2024-11-07 09:40:12.542248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:45.158 [2024-11-07 09:40:12.542256] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:45.158 [2024-11-07 09:40:12.542263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:45.158 [2024-11-07 09:40:12.542270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:45.158 [2024-11-07 09:40:12.542278] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:45.158 [2024-11-07 09:40:12.542285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:45.158 [2024-11-07 09:40:12.542293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:45.158 09:40:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:45.158 09:40:12 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:45.158 09:40:12 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:45.158 09:40:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:45.158 09:40:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:45.158 09:40:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:45.158 09:40:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:45.158 09:40:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:45.158 09:40:12 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.158 09:40:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:45.158 09:40:12 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.158 09:40:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:45.158 09:40:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:45.416 [2024-11-07 09:40:13.041136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:11:45.417 [2024-11-07 09:40:13.041996] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:45.417 [2024-11-07 09:40:13.042025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:45.417 [2024-11-07 09:40:13.042036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:45.417 [2024-11-07 09:40:13.042048] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:45.417 [2024-11-07 09:40:13.042059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:45.417 [2024-11-07 09:40:13.042066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:45.417 [2024-11-07 09:40:13.042075] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:45.417 [2024-11-07 09:40:13.042081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:45.417 [2024-11-07 09:40:13.042089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:45.417 [2024-11-07 09:40:13.042096] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:45.417 [2024-11-07 09:40:13.042104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:45.417 [2024-11-07 09:40:13.042111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:45.675 09:40:13 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:45.675 09:40:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:45.675 09:40:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:45.675 09:40:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:45.675 09:40:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:45.675 09:40:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:11:45.675 09:40:13 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.675 09:40:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:45.675 09:40:13 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.675 09:40:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:45.675 09:40:13 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:45.675 09:40:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:45.675 09:40:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:45.675 09:40:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:45.675 09:40:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:45.675 09:40:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:45.675 09:40:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:45.675 09:40:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:45.675 09:40:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:45.934 09:40:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:45.934 09:40:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:45.934 09:40:13 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:58.130 09:40:25 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:58.130 09:40:25 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:58.130 09:40:25 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:58.130 09:40:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:58.130 09:40:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:58.131 09:40:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:58.131 09:40:25 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.131 09:40:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:58.131 09:40:25 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.131 09:40:25 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:58.131 09:40:25 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:58.131 09:40:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:58.131 09:40:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:58.131 09:40:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:58.131 09:40:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:58.131 [2024-11-07 09:40:25.441348] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
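The 42.87/44.70/44.64 figures in the summaries come from the timing wrapper traced at common/autotest_common.sh:707-720: it sets TIMEFORMAT=%2R so bash's time keyword prints only elapsed real seconds, captures that while the timed command keeps its own stdout/stderr, and hands the number back for the printf summary. A runnable sketch of the idea - the fd juggling here is a reconstruction, not a quote of the helper:

    timing_cmd() {
        local cmd_es=0 time=0 TIMEFORMAT=%2R
        exec 3>&1 4>&2                           # keep the caller's stdout/stderr
        time=$( { time "$@" 1>&3 2>&4; } 2>&1 ) || cmd_es=$?
        exec 3>&- 4>&-
        echo "$time"                             # elapsed real seconds, e.g. 44.64
        return "$cmd_es"
    }

    helper_time=$(timing_cmd remove_attach_helper 3 6 true)
    printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
        "$helper_time" "${#nvmes[@]}"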
00:11:58.131 [2024-11-07 09:40:25.442312] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.131 [2024-11-07 09:40:25.442348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:58.131 [2024-11-07 09:40:25.442359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:58.131 [2024-11-07 09:40:25.442375] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.131 [2024-11-07 09:40:25.442382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:58.131 [2024-11-07 09:40:25.442391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:58.131 [2024-11-07 09:40:25.442397] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.131 [2024-11-07 09:40:25.442409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:58.131 [2024-11-07 09:40:25.442416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:58.131 [2024-11-07 09:40:25.442424] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.131 [2024-11-07 09:40:25.442430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:58.131 [2024-11-07 09:40:25.442438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:58.131 09:40:25 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:58.131 09:40:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:58.131 09:40:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:58.131 09:40:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:58.131 09:40:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:58.131 09:40:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:58.131 09:40:25 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.131 09:40:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:58.131 09:40:25 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.131 09:40:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:58.131 09:40:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:58.389 [2024-11-07 09:40:25.841348] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:58.389 [2024-11-07 09:40:25.842219] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.389 [2024-11-07 09:40:25.842249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:58.389 [2024-11-07 09:40:25.842259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:58.389 [2024-11-07 09:40:25.842270] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.389 [2024-11-07 09:40:25.842279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:58.389 [2024-11-07 09:40:25.842286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:58.389 [2024-11-07 09:40:25.842294] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.389 [2024-11-07 09:40:25.842300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:58.389 [2024-11-07 09:40:25.842309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:58.389 [2024-11-07 09:40:25.842316] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.389 [2024-11-07 09:40:25.842326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:58.389 [2024-11-07 09:40:25.842332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:58.389 09:40:25 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:58.389 09:40:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:58.390 09:40:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:58.390 09:40:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:58.390 09:40:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:58.390 09:40:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:58.390 09:40:25 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.390 09:40:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:58.390 09:40:25 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.390 09:40:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:58.390 09:40:26 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:58.650 09:40:26 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:58.650 09:40:26 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:58.650 09:40:26 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:58.650 09:40:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:58.650 09:40:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:58.650 09:40:26 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:58.650 09:40:26 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:58.650 09:40:26 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
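Teardown follows: killprocess, traced at common/autotest_common.sh:952-976, is a guarded kill-and-reap. Reconstructed from the trace - on this run ps reported the process name as reactor_0, SPDK's core-0 reactor thread, and the sudo branch (elided below) was not taken:

    killprocess() {
        local pid=$1 process_name
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2> /dev/null || return 0        # nothing left to kill
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")
            # if the pid is a sudo wrapper, the real target is its child
            # (branch elided - not exercised in this run)
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                             # reap; ignore the signal status
    }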
00:11:58.650 09:40:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:58.650 09:40:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:58.650 09:40:26 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:10.851 09:40:38 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:10.851 09:40:38 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:10.851 09:40:38 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:10.851 09:40:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:10.851 09:40:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:10.851 09:40:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:10.851 09:40:38 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.851 09:40:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:10.851 09:40:38 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.851 09:40:38 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:10.851 09:40:38 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:10.851 09:40:38 sw_hotplug -- common/autotest_common.sh@717 -- # time=44.64 00:12:10.851 09:40:38 sw_hotplug -- common/autotest_common.sh@718 -- # echo 44.64 00:12:10.851 09:40:38 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:12:10.851 09:40:38 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.64 00:12:10.851 09:40:38 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.64 2 00:12:10.851 remove_attach_helper took 44.64s to complete (handling 2 nvme drive(s)) 09:40:38 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:10.851 09:40:38 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 67325 00:12:10.851 09:40:38 sw_hotplug -- common/autotest_common.sh@952 -- # '[' -z 67325 ']' 00:12:10.851 09:40:38 sw_hotplug -- common/autotest_common.sh@956 -- # kill -0 67325 00:12:10.851 09:40:38 sw_hotplug -- common/autotest_common.sh@957 -- # uname 00:12:10.851 09:40:38 sw_hotplug -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:10.851 09:40:38 sw_hotplug -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67325 00:12:10.851 09:40:38 sw_hotplug -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:10.851 killing process with pid 67325 00:12:10.851 09:40:38 sw_hotplug -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:10.851 09:40:38 sw_hotplug -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67325' 00:12:10.851 09:40:38 sw_hotplug -- common/autotest_common.sh@971 -- # kill 67325 00:12:10.851 09:40:38 sw_hotplug -- common/autotest_common.sh@976 -- # wait 67325 00:12:12.270 09:40:39 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:12.270 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:12.843 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:12.843 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:12.843 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:12.843 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:12.843 00:12:12.843 real 2m29.226s 00:12:12.843 user 1m50.768s 00:12:12.843 sys 0m16.863s 00:12:12.843 ************************************ 
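The xtrace above is the hot-remove half of sw_hotplug's event loop: bdev_bdfs asks the running target for every PCI address that still backs an NVMe bdev, and the script spins in half-second steps until the surprise-removed devices drop out of that list before rebinding them. A minimal sketch of that polling helper, assuming rpc_cmd forwards to scripts/rpc.py against the live target (the filter and messages are verbatim from the trace; wait_until_all_gone is a hypothetical name for the loop the test inlines):

# List the PCI addresses (BDFs) of NVMe devices still backing SPDK bdevs.
bdev_bdfs() {
    rpc_cmd bdev_get_bdevs \
        | jq -r '.[].driver_specific.nvme[].pci_address' \
        | sort -u
}

# Spin until every hot-removed device has disappeared from the bdev layer.
wait_until_all_gone() {
    local bdfs
    bdfs=($(bdev_bdfs))
    while ((${#bdfs[@]} > 0)); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done
}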
00:12:12.843 END TEST sw_hotplug 00:12:12.843 ************************************ 00:12:12.843 09:40:40 sw_hotplug -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:12.843 09:40:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:12.843 09:40:40 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:12.843 09:40:40 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:12.843 09:40:40 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:12.843 09:40:40 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:12.843 09:40:40 -- common/autotest_common.sh@10 -- # set +x 00:12:13.106 ************************************ 00:12:13.106 START TEST nvme_xnvme 00:12:13.106 ************************************ 00:12:13.106 09:40:40 nvme_xnvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:13.106 * Looking for test storage... 00:12:13.106 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:13.106 09:40:40 nvme_xnvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:13.106 09:40:40 nvme_xnvme -- common/autotest_common.sh@1691 -- # lcov --version 00:12:13.106 09:40:40 nvme_xnvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:13.106 09:40:40 nvme_xnvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:13.106 09:40:40 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:13.106 09:40:40 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:13.106 09:40:40 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:13.106 09:40:40 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:13.106 09:40:40 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:13.106 09:40:40 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:13.106 09:40:40 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:13.106 09:40:40 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:13.106 09:40:40 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:13.106 09:40:40 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:13.106 09:40:40 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:13.106 09:40:40 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:13.106 09:40:40 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:13.106 09:40:40 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:13.106 09:40:40 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:13.106 09:40:40 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:13.106 09:40:40 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:13.106 09:40:40 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:13.106 09:40:40 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:13.106 09:40:40 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:13.106 09:40:40 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:13.106 09:40:40 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:13.106 09:40:40 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:13.106 09:40:40 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:13.106 09:40:40 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:13.106 09:40:40 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:13.106 09:40:40 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:13.106 09:40:40 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:13.106 09:40:40 nvme_xnvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:13.106 09:40:40 nvme_xnvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:13.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.106 --rc genhtml_branch_coverage=1 00:12:13.106 --rc genhtml_function_coverage=1 00:12:13.106 --rc genhtml_legend=1 00:12:13.106 --rc geninfo_all_blocks=1 00:12:13.106 --rc geninfo_unexecuted_blocks=1 00:12:13.106 00:12:13.106 ' 00:12:13.106 09:40:40 nvme_xnvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:13.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.106 --rc genhtml_branch_coverage=1 00:12:13.106 --rc genhtml_function_coverage=1 00:12:13.106 --rc genhtml_legend=1 00:12:13.106 --rc geninfo_all_blocks=1 00:12:13.106 --rc geninfo_unexecuted_blocks=1 00:12:13.106 00:12:13.106 ' 00:12:13.106 09:40:40 nvme_xnvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:13.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.106 --rc genhtml_branch_coverage=1 00:12:13.106 --rc genhtml_function_coverage=1 00:12:13.106 --rc genhtml_legend=1 00:12:13.106 --rc geninfo_all_blocks=1 00:12:13.106 --rc geninfo_unexecuted_blocks=1 00:12:13.106 00:12:13.106 ' 00:12:13.106 09:40:40 nvme_xnvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:13.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.106 --rc genhtml_branch_coverage=1 00:12:13.106 --rc genhtml_function_coverage=1 00:12:13.106 --rc genhtml_legend=1 00:12:13.106 --rc geninfo_all_blocks=1 00:12:13.106 --rc geninfo_unexecuted_blocks=1 00:12:13.106 00:12:13.106 ' 00:12:13.106 09:40:40 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:13.106 09:40:40 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:13.106 09:40:40 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:13.106 09:40:40 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:13.106 09:40:40 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:13.106 09:40:40 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.106 09:40:40 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.106 09:40:40 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.106 09:40:40 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:13.106 09:40:40 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.106 09:40:40 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:12:13.106 09:40:40 nvme_xnvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:13.106 09:40:40 nvme_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:13.106 09:40:40 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:13.106 ************************************ 00:12:13.106 START TEST xnvme_to_malloc_dd_copy 00:12:13.106 ************************************ 00:12:13.106 09:40:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1127 -- # malloc_to_xnvme_copy 00:12:13.106 09:40:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:12:13.106 09:40:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:13.106 09:40:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:13.106 09:40:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:12:13.106 09:40:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:12:13.106 09:40:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:12:13.106 09:40:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:13.106 09:40:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:12:13.106 09:40:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:12:13.106 09:40:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:12:13.106 09:40:40 
nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:12:13.106 09:40:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:12:13.106 09:40:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:12:13.107 09:40:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:12:13.107 09:40:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:12:13.107 09:40:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:12:13.107 09:40:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:13.107 09:40:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:13.107 09:40:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:13.107 09:40:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:13.107 09:40:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:13.107 09:40:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:13.107 09:40:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:13.107 09:40:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:13.107 { 00:12:13.107 "subsystems": [ 00:12:13.107 { 00:12:13.107 "subsystem": "bdev", 00:12:13.107 "config": [ 00:12:13.107 { 00:12:13.107 "params": { 00:12:13.107 "block_size": 512, 00:12:13.107 "num_blocks": 2097152, 00:12:13.107 "name": "malloc0" 00:12:13.107 }, 00:12:13.107 "method": "bdev_malloc_create" 00:12:13.107 }, 00:12:13.107 { 00:12:13.107 "params": { 00:12:13.107 "io_mechanism": "libaio", 00:12:13.107 "filename": "/dev/nullb0", 00:12:13.107 "name": "null0" 00:12:13.107 }, 00:12:13.107 "method": "bdev_xnvme_create" 00:12:13.107 }, 00:12:13.107 { 00:12:13.107 "method": "bdev_wait_for_examine" 00:12:13.107 } 00:12:13.107 ] 00:12:13.107 } 00:12:13.107 ] 00:12:13.107 } 00:12:13.368 [2024-11-07 09:40:40.788333] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
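The JSON block just above is the complete bdev topology that gen_conf hands to spdk_dd on a substituted file descriptor (--json /dev/fd/62): a 1 GiB malloc bdev (2097152 blocks of 512 bytes) as the input and an xnvme bdev over /dev/nullb0 with the libaio io_mechanism as the output. The same forward pass, reproduced standalone with the config written to a temp file:

#!/usr/bin/env bash
# Forward pass of malloc_to_xnvme_copy: malloc0 -> null0 via libaio.
SPDK=/home/vagrant/spdk_repo/spdk

conf=$(mktemp)
cat > "$conf" <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": { "block_size": 512, "num_blocks": 2097152, "name": "malloc0" },
          "method": "bdev_malloc_create"
        },
        {
          "params": { "io_mechanism": "libaio", "filename": "/dev/nullb0", "name": "null0" },
          "method": "bdev_xnvme_create"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF

"$SPDK/build/bin/spdk_dd" --ib=malloc0 --ob=null0 --json "$conf"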
00:12:13.368 [2024-11-07 09:40:40.788481] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68692 ] 00:12:13.368 [2024-11-07 09:40:40.949788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:13.630 [2024-11-07 09:40:41.069734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.553  [2024-11-07T09:40:44.609Z] Copying: 225/1024 [MB] (225 MBps) [2024-11-07T09:40:45.180Z] Copying: 452/1024 [MB] (226 MBps) [2024-11-07T09:40:46.556Z] Copying: 695/1024 [MB] (243 MBps) [2024-11-07T09:40:46.556Z] Copying: 997/1024 [MB] (301 MBps) [2024-11-07T09:40:48.457Z] Copying: 1024/1024 [MB] (average 250 MBps) 00:12:20.786 00:12:20.786 09:40:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:20.786 09:40:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:20.786 09:40:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:20.786 09:40:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:20.786 { 00:12:20.786 "subsystems": [ 00:12:20.786 { 00:12:20.786 "subsystem": "bdev", 00:12:20.786 "config": [ 00:12:20.786 { 00:12:20.786 "params": { 00:12:20.786 "block_size": 512, 00:12:20.786 "num_blocks": 2097152, 00:12:20.786 "name": "malloc0" 00:12:20.786 }, 00:12:20.787 "method": "bdev_malloc_create" 00:12:20.787 }, 00:12:20.787 { 00:12:20.787 "params": { 00:12:20.787 "io_mechanism": "libaio", 00:12:20.787 "filename": "/dev/nullb0", 00:12:20.787 "name": "null0" 00:12:20.787 }, 00:12:20.787 "method": "bdev_xnvme_create" 00:12:20.787 }, 00:12:20.787 { 00:12:20.787 "method": "bdev_wait_for_examine" 00:12:20.787 } 00:12:20.787 ] 00:12:20.787 } 00:12:20.787 ] 00:12:20.787 } 00:12:20.787 [2024-11-07 09:40:48.237679] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
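The second spdk_dd run above is the return pass: the bdev config is identical and only the direction flips, so null0 becomes the input and a fresh malloc0 the output (against a null_blk device the reads complete but carry no stored data, so this half exercises the read path's throughput rather than verifying contents). With the $conf file from the previous sketch still in place:

# Return pass: same config, direction reversed (null0 -> malloc0).
"$SPDK/build/bin/spdk_dd" --ib=null0 --ob=malloc0 --json "$conf"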
00:12:20.787 [2024-11-07 09:40:48.237795] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68785 ] 00:12:20.787 [2024-11-07 09:40:48.393373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:21.045 [2024-11-07 09:40:48.468872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.947  [2024-11-07T09:40:51.553Z] Copying: 303/1024 [MB] (303 MBps) [2024-11-07T09:40:52.488Z] Copying: 608/1024 [MB] (304 MBps) [2024-11-07T09:40:52.747Z] Copying: 913/1024 [MB] (305 MBps) [2024-11-07T09:40:54.649Z] Copying: 1024/1024 [MB] (average 304 MBps) 00:12:26.978 00:12:26.979 09:40:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:26.979 09:40:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:26.979 09:40:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:26.979 09:40:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:26.979 09:40:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:26.979 09:40:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:26.979 { 00:12:26.979 "subsystems": [ 00:12:26.979 { 00:12:26.979 "subsystem": "bdev", 00:12:26.979 "config": [ 00:12:26.979 { 00:12:26.979 "params": { 00:12:26.979 "block_size": 512, 00:12:26.979 "num_blocks": 2097152, 00:12:26.979 "name": "malloc0" 00:12:26.979 }, 00:12:26.979 "method": "bdev_malloc_create" 00:12:26.979 }, 00:12:26.979 { 00:12:26.979 "params": { 00:12:26.979 "io_mechanism": "io_uring", 00:12:26.979 "filename": "/dev/nullb0", 00:12:26.979 "name": "null0" 00:12:26.979 }, 00:12:26.979 "method": "bdev_xnvme_create" 00:12:26.979 }, 00:12:26.979 { 00:12:26.979 "method": "bdev_wait_for_examine" 00:12:26.979 } 00:12:26.979 ] 00:12:26.979 } 00:12:26.979 ] 00:12:26.979 } 00:12:26.979 [2024-11-07 09:40:54.584334] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
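For the second round the harness changes exactly one key, io_mechanism, from libaio to io_uring, and reruns both directions with an otherwise unchanged config. The same xnvme bdev can also be created against a live target over RPC, in the positional form (filename, bdev name, io mechanism) this log uses later for real namespaces; the -b flag on the follow-up query is an assumption from stock rpc.py, not from this trace:

# One-off equivalent over JSON-RPC, target already running:
"$SPDK/scripts/rpc.py" bdev_xnvme_create /dev/nullb0 null0 io_uring
"$SPDK/scripts/rpc.py" bdev_get_bdevs -b null0    # confirm the bdev registered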
00:12:26.979 [2024-11-07 09:40:54.584452] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68861 ] 00:12:27.240 [2024-11-07 09:40:54.740871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:27.240 [2024-11-07 09:40:54.824402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.144  [2024-11-07T09:40:57.749Z] Copying: 312/1024 [MB] (312 MBps) [2024-11-07T09:40:58.685Z] Copying: 624/1024 [MB] (312 MBps) [2024-11-07T09:40:58.943Z] Copying: 936/1024 [MB] (312 MBps) [2024-11-07T09:41:00.846Z] Copying: 1024/1024 [MB] (average 312 MBps) 00:12:33.175 00:12:33.175 09:41:00 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:33.175 09:41:00 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:33.175 09:41:00 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:33.175 09:41:00 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:33.175 { 00:12:33.175 "subsystems": [ 00:12:33.175 { 00:12:33.175 "subsystem": "bdev", 00:12:33.175 "config": [ 00:12:33.175 { 00:12:33.175 "params": { 00:12:33.175 "block_size": 512, 00:12:33.175 "num_blocks": 2097152, 00:12:33.175 "name": "malloc0" 00:12:33.175 }, 00:12:33.175 "method": "bdev_malloc_create" 00:12:33.175 }, 00:12:33.175 { 00:12:33.175 "params": { 00:12:33.175 "io_mechanism": "io_uring", 00:12:33.175 "filename": "/dev/nullb0", 00:12:33.175 "name": "null0" 00:12:33.175 }, 00:12:33.175 "method": "bdev_xnvme_create" 00:12:33.175 }, 00:12:33.175 { 00:12:33.175 "method": "bdev_wait_for_examine" 00:12:33.175 } 00:12:33.175 ] 00:12:33.175 } 00:12:33.175 ] 00:12:33.175 } 00:12:33.175 [2024-11-07 09:41:00.817370] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
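Every dd pass in this test runs against /dev/nullb0, the RAM-backed block device that init_null_blk set up before the first copy and that remove_null_blk tears down after the last; both steps appear in the trace as plain modprobe calls:

# Backing device for the whole test: a 1 GiB kernel null_blk instance.
modprobe null_blk gb=1      # init_null_blk: /dev/nullb0 appears
# ... the spdk_dd passes run here ...
modprobe -r null_blk        # remove_null_blk: the device is gone again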
00:12:33.175 [2024-11-07 09:41:00.817489] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68937 ] 00:12:33.434 [2024-11-07 09:41:00.973359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:33.434 [2024-11-07 09:41:01.058233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.335  [2024-11-07T09:41:03.941Z] Copying: 317/1024 [MB] (317 MBps) [2024-11-07T09:41:04.876Z] Copying: 634/1024 [MB] (317 MBps) [2024-11-07T09:41:05.135Z] Copying: 951/1024 [MB] (317 MBps) [2024-11-07T09:41:07.039Z] Copying: 1024/1024 [MB] (average 317 MBps) 00:12:39.368 00:12:39.368 09:41:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:12:39.368 09:41:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:39.368 00:12:39.368 real 0m26.247s 00:12:39.368 user 0m22.983s 00:12:39.368 sys 0m2.746s 00:12:39.368 09:41:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:39.368 09:41:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:39.368 ************************************ 00:12:39.368 END TEST xnvme_to_malloc_dd_copy 00:12:39.368 ************************************ 00:12:39.368 09:41:06 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:39.368 09:41:06 nvme_xnvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:39.368 09:41:06 nvme_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:39.368 09:41:06 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:39.368 ************************************ 00:12:39.368 START TEST xnvme_bdevperf 00:12:39.368 ************************************ 00:12:39.368 09:41:06 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1127 -- # xnvme_bdevperf 00:12:39.368 09:41:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:12:39.368 09:41:06 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:39.368 09:41:06 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:39.368 09:41:07 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:12:39.368 09:41:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:12:39.368 09:41:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:39.368 09:41:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:12:39.368 09:41:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:12:39.368 09:41:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:12:39.368 09:41:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:12:39.368 09:41:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:12:39.368 09:41:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:12:39.368 09:41:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:39.368 09:41:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:39.368 09:41:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:39.368 
09:41:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:39.369 09:41:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:39.369 09:41:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:39.369 09:41:07 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:39.369 09:41:07 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:39.645 { 00:12:39.645 "subsystems": [ 00:12:39.645 { 00:12:39.645 "subsystem": "bdev", 00:12:39.645 "config": [ 00:12:39.645 { 00:12:39.645 "params": { 00:12:39.645 "io_mechanism": "libaio", 00:12:39.645 "filename": "/dev/nullb0", 00:12:39.645 "name": "null0" 00:12:39.645 }, 00:12:39.645 "method": "bdev_xnvme_create" 00:12:39.645 }, 00:12:39.645 { 00:12:39.645 "method": "bdev_wait_for_examine" 00:12:39.645 } 00:12:39.645 ] 00:12:39.645 } 00:12:39.645 ] 00:12:39.645 } 00:12:39.645 [2024-11-07 09:41:07.076066] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:12:39.645 [2024-11-07 09:41:07.076181] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69031 ] 00:12:39.645 [2024-11-07 09:41:07.240782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:39.908 [2024-11-07 09:41:07.359194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:40.170 Running I/O for 5 seconds... 00:12:42.052 153216.00 IOPS, 598.50 MiB/s [2024-11-07T09:41:10.659Z] 167200.00 IOPS, 653.12 MiB/s [2024-11-07T09:41:12.035Z] 178432.00 IOPS, 697.00 MiB/s [2024-11-07T09:41:12.970Z] 184032.00 IOPS, 718.88 MiB/s 00:12:45.299 Latency(us) 00:12:45.299 [2024-11-07T09:41:12.970Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:45.299 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:45.299 null0 : 5.00 187511.39 732.47 0.00 0.00 338.93 105.55 2041.70 00:12:45.299 [2024-11-07T09:41:12.970Z] =================================================================================================================== 00:12:45.299 [2024-11-07T09:41:12.970Z] Total : 187511.39 732.47 0.00 0.00 338.93 105.55 2041.70 00:12:45.560 09:41:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:45.560 09:41:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:45.560 09:41:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:45.560 09:41:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:45.560 09:41:13 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:45.560 09:41:13 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:45.820 { 00:12:45.820 "subsystems": [ 00:12:45.820 { 00:12:45.820 "subsystem": "bdev", 00:12:45.820 "config": [ 00:12:45.820 { 00:12:45.820 "params": { 00:12:45.820 "io_mechanism": "io_uring", 00:12:45.820 "filename": "/dev/nullb0", 00:12:45.820 "name": "null0" 00:12:45.820 }, 00:12:45.820 "method": "bdev_xnvme_create" 00:12:45.820 }, 00:12:45.820 { 00:12:45.820 "method": 
"bdev_wait_for_examine" 00:12:45.820 } 00:12:45.820 ] 00:12:45.820 } 00:12:45.820 ] 00:12:45.820 } 00:12:45.820 [2024-11-07 09:41:13.285938] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:12:45.820 [2024-11-07 09:41:13.286053] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69105 ] 00:12:45.820 [2024-11-07 09:41:13.443784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:46.079 [2024-11-07 09:41:13.532193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.079 Running I/O for 5 seconds... 00:12:48.413 232384.00 IOPS, 907.75 MiB/s [2024-11-07T09:41:17.019Z] 232256.00 IOPS, 907.25 MiB/s [2024-11-07T09:41:17.954Z] 232192.00 IOPS, 907.00 MiB/s [2024-11-07T09:41:18.889Z] 232176.00 IOPS, 906.94 MiB/s 00:12:51.218 Latency(us) 00:12:51.218 [2024-11-07T09:41:18.889Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:51.218 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:51.218 null0 : 5.00 232168.88 906.91 0.00 0.00 273.69 144.94 1506.07 00:12:51.218 [2024-11-07T09:41:18.889Z] =================================================================================================================== 00:12:51.218 [2024-11-07T09:41:18.889Z] Total : 232168.88 906.91 0.00 0.00 273.69 144.94 1506.07 00:12:51.787 09:41:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:12:51.787 09:41:19 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:51.787 00:12:51.787 real 0m12.317s 00:12:51.787 user 0m9.978s 00:12:51.787 sys 0m2.105s 00:12:51.787 09:41:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:51.787 ************************************ 00:12:51.787 END TEST xnvme_bdevperf 00:12:51.787 ************************************ 00:12:51.787 09:41:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:51.787 00:12:51.787 real 0m38.837s 00:12:51.787 user 0m33.080s 00:12:51.787 sys 0m4.968s 00:12:51.787 ************************************ 00:12:51.787 09:41:19 nvme_xnvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:51.787 09:41:19 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:51.787 END TEST nvme_xnvme 00:12:51.787 ************************************ 00:12:51.787 09:41:19 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:51.787 09:41:19 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:51.787 09:41:19 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:51.787 09:41:19 -- common/autotest_common.sh@10 -- # set +x 00:12:51.787 ************************************ 00:12:51.787 START TEST blockdev_xnvme 00:12:51.787 ************************************ 00:12:51.787 09:41:19 blockdev_xnvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:52.057 * Looking for test storage... 
00:12:52.057 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:52.057 09:41:19 blockdev_xnvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:52.057 09:41:19 blockdev_xnvme -- common/autotest_common.sh@1691 -- # lcov --version 00:12:52.057 09:41:19 blockdev_xnvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:52.057 09:41:19 blockdev_xnvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:52.057 09:41:19 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:52.057 09:41:19 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:52.057 09:41:19 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:52.057 09:41:19 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:52.057 09:41:19 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:52.057 09:41:19 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:52.057 09:41:19 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:52.057 09:41:19 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:52.057 09:41:19 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:52.057 09:41:19 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:52.057 09:41:19 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:52.057 09:41:19 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:52.057 09:41:19 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:12:52.057 09:41:19 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:52.057 09:41:19 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:52.057 09:41:19 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:52.057 09:41:19 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:52.057 09:41:19 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:52.057 09:41:19 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:52.057 09:41:19 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:52.057 09:41:19 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:52.057 09:41:19 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:52.057 09:41:19 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:52.057 09:41:19 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:52.057 09:41:19 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:52.057 09:41:19 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:52.057 09:41:19 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:52.057 09:41:19 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:12:52.057 09:41:19 blockdev_xnvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:52.057 09:41:19 blockdev_xnvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:52.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.057 --rc genhtml_branch_coverage=1 00:12:52.057 --rc genhtml_function_coverage=1 00:12:52.057 --rc genhtml_legend=1 00:12:52.057 --rc geninfo_all_blocks=1 00:12:52.057 --rc geninfo_unexecuted_blocks=1 00:12:52.057 00:12:52.057 ' 00:12:52.057 09:41:19 blockdev_xnvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:52.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.057 --rc genhtml_branch_coverage=1 00:12:52.057 --rc genhtml_function_coverage=1 00:12:52.057 --rc genhtml_legend=1 
00:12:52.057 --rc geninfo_all_blocks=1 00:12:52.057 --rc geninfo_unexecuted_blocks=1 00:12:52.057 00:12:52.057 ' 00:12:52.057 09:41:19 blockdev_xnvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:52.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.057 --rc genhtml_branch_coverage=1 00:12:52.057 --rc genhtml_function_coverage=1 00:12:52.057 --rc genhtml_legend=1 00:12:52.057 --rc geninfo_all_blocks=1 00:12:52.057 --rc geninfo_unexecuted_blocks=1 00:12:52.057 00:12:52.057 ' 00:12:52.057 09:41:19 blockdev_xnvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:52.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.057 --rc genhtml_branch_coverage=1 00:12:52.057 --rc genhtml_function_coverage=1 00:12:52.057 --rc genhtml_legend=1 00:12:52.057 --rc geninfo_all_blocks=1 00:12:52.057 --rc geninfo_unexecuted_blocks=1 00:12:52.057 00:12:52.057 ' 00:12:52.057 09:41:19 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:52.057 09:41:19 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:12:52.057 09:41:19 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:52.057 09:41:19 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:52.057 09:41:19 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:52.057 09:41:19 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:52.057 09:41:19 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:12:52.057 09:41:19 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:12:52.057 09:41:19 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:12:52.057 09:41:19 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:12:52.057 09:41:19 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:12:52.057 09:41:19 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:12:52.057 09:41:19 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:12:52.057 09:41:19 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:12:52.057 09:41:19 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:12:52.057 09:41:19 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:12:52.057 09:41:19 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:12:52.057 09:41:19 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:12:52.057 09:41:19 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:12:52.057 09:41:19 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:12:52.057 09:41:19 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:12:52.057 09:41:19 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:12:52.057 09:41:19 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:12:52.057 09:41:19 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:12:52.057 09:41:19 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=69247 00:12:52.057 09:41:19 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:52.057 09:41:19 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 69247 00:12:52.057 09:41:19 blockdev_xnvme -- common/autotest_common.sh@833 -- # '[' -z 69247 ']' 00:12:52.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
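blockdev.sh's prologue here starts a bare spdk_tgt (pid 69247 in this run) and blocks in waitforlisten until the JSON-RPC socket answers before any bdev gets configured. A minimal sketch of that start/wait/stop pattern, assuming the default /var/tmp/spdk.sock from the trace; the real waitforlisten also caps its retries instead of looping forever:

#!/usr/bin/env bash
# Start the SPDK target and wait for its JSON-RPC socket to come up.
SPDK=/home/vagrant/spdk_repo/spdk

"$SPDK/build/bin/spdk_tgt" &
tgt_pid=$!

until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods > /dev/null 2>&1; do
    kill -0 "$tgt_pid" 2>/dev/null || { echo "spdk_tgt died during startup" >&2; exit 1; }
    sleep 0.1
done

# ... configure and exercise bdevs over rpc.py here ...

kill "$tgt_pid"
wait "$tgt_pid" || true    # wait returns the kill signal's status; ignore it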
00:12:52.057 09:41:19 blockdev_xnvme -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:52.057 09:41:19 blockdev_xnvme -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:52.057 09:41:19 blockdev_xnvme -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:52.057 09:41:19 blockdev_xnvme -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:52.057 09:41:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:52.057 09:41:19 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:12:52.057 [2024-11-07 09:41:19.651767] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:12:52.057 [2024-11-07 09:41:19.651912] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69247 ] 00:12:52.318 [2024-11-07 09:41:19.811506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:52.318 [2024-11-07 09:41:19.898078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.885 09:41:20 blockdev_xnvme -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:52.885 09:41:20 blockdev_xnvme -- common/autotest_common.sh@866 -- # return 0 00:12:52.885 09:41:20 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:12:52.885 09:41:20 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:12:52.885 09:41:20 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:12:52.885 09:41:20 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:12:52.885 09:41:20 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:53.144 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:53.403 Waiting for block devices as requested 00:12:53.403 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:53.403 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:53.403 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:53.661 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:58.967 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:58.967 09:41:26 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:12:58.967 09:41:26 blockdev_xnvme -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:12:58.967 09:41:26 blockdev_xnvme -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:12:58.967 09:41:26 blockdev_xnvme -- common/autotest_common.sh@1656 -- # local nvme bdf 00:12:58.967 09:41:26 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:58.967 09:41:26 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:12:58.967 09:41:26 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:12:58.967 09:41:26 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:12:58.967 09:41:26 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:58.967 09:41:26 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:58.967 09:41:26 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 
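The checks running here (and continuing below for the remaining namespaces) are setup_xnvme_conf's zoned-device filter: each /sys/block/nvme*/queue/zoned must read "none", and every surviving block device gets a bdev_xnvme_create line queued into the nvmes array, which is then replayed through rpc_cmd. Condensed into a standalone sketch (same sysfs layout; a per-command rpc.py loop stands in for the harness's batched rpc_cmd pipe):

#!/usr/bin/env bash
# Create one io_uring xnvme bdev per non-zoned NVMe namespace.
SPDK=/home/vagrant/spdk_repo/spdk
io_mechanism=io_uring
nvmes=()

for nvme in /dev/nvme*n*; do
    name=${nvme##*/}                                    # e.g. nvme2n3
    [[ -b $nvme ]] || continue
    zoned=$(cat "/sys/block/$name/queue/zoned" 2>/dev/null || echo none)
    [[ $zoned == none ]] || continue                    # skip zoned namespaces
    nvmes+=("bdev_xnvme_create $nvme $name $io_mechanism")
done

for cmd in "${nvmes[@]}"; do
    # Intentional word splitting: each entry is a full rpc.py command line.
    "$SPDK/scripts/rpc.py" $cmd
done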
00:12:58.967 09:41:26 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:12:58.967 09:41:26 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:12:58.967 09:41:26 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:58.967 09:41:26 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:58.967 09:41:26 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:12:58.967 09:41:26 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:12:58.967 09:41:26 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:12:58.967 09:41:26 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:58.967 09:41:26 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:58.967 09:41:26 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:12:58.967 09:41:26 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:12:58.967 09:41:26 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:12:58.967 09:41:26 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:58.967 09:41:26 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:58.967 09:41:26 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:12:58.967 09:41:26 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:12:58.967 09:41:26 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:12:58.967 09:41:26 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:58.967 09:41:26 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:58.967 09:41:26 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:12:58.967 09:41:26 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:12:58.967 09:41:26 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:12:58.967 09:41:26 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:58.967 09:41:26 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:58.967 09:41:26 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:12:58.967 09:41:26 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:12:58.967 09:41:26 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:12:58.967 09:41:26 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:58.967 09:41:26 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:58.967 09:41:26 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:12:58.967 09:41:26 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:58.967 09:41:26 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:58.967 09:41:26 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:58.967 09:41:26 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:12:58.967 09:41:26 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:58.967 09:41:26 blockdev_xnvme -- 
bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:58.967 09:41:26 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:58.967 09:41:26 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:12:58.967 09:41:26 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:58.967 09:41:26 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:58.967 09:41:26 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:58.967 09:41:26 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:12:58.967 09:41:26 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:58.967 09:41:26 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:58.967 09:41:26 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:58.967 09:41:26 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:12:58.967 09:41:26 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:58.967 09:41:26 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:58.967 09:41:26 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:58.967 09:41:26 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:12:58.967 09:41:26 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:58.967 09:41:26 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:58.967 09:41:26 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:12:58.967 09:41:26 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:12:58.967 09:41:26 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.967 09:41:26 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:58.967 09:41:26 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:12:58.967 nvme0n1 00:12:58.967 nvme1n1 00:12:58.967 nvme2n1 00:12:58.967 nvme2n2 00:12:58.967 nvme2n3 00:12:58.967 nvme3n1 00:12:58.967 09:41:26 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.967 09:41:26 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:12:58.967 09:41:26 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.967 09:41:26 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:58.967 09:41:26 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.967 09:41:26 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:12:58.967 09:41:26 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:12:58.967 09:41:26 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.967 09:41:26 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:58.968 09:41:26 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.968 09:41:26 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:12:58.968 09:41:26 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.968 09:41:26 blockdev_xnvme -- 
common/autotest_common.sh@10 -- # set +x 00:12:58.968 09:41:26 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.968 09:41:26 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:12:58.968 09:41:26 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.968 09:41:26 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:58.968 09:41:26 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.968 09:41:26 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:12:58.968 09:41:26 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:12:58.968 09:41:26 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:12:58.968 09:41:26 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.968 09:41:26 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:58.968 09:41:26 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.968 09:41:26 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:12:58.968 09:41:26 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:12:58.968 09:41:26 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "f79cbb70-0fce-49df-a4ea-22d63b2e66a4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "f79cbb70-0fce-49df-a4ea-22d63b2e66a4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "dfcb5b88-6e1d-4ca6-95da-67f2bac1d788"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "dfcb5b88-6e1d-4ca6-95da-67f2bac1d788",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "04d87224-460d-4472-bce9-9aaef2b2b591"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "04d87224-460d-4472-bce9-9aaef2b2b591",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' 
"unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "76e3f67e-e88a-40a5-8b02-e6c3283960be"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "76e3f67e-e88a-40a5-8b02-e6c3283960be",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "784d49d1-3767-48fb-8175-7b6afd8d3651"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "784d49d1-3767-48fb-8175-7b6afd8d3651",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "46ee10c8-104a-4cad-900d-c01216fa2a19"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "46ee10c8-104a-4cad-900d-c01216fa2a19",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:12:58.968 09:41:26 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:12:58.968 09:41:26 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:12:58.968 09:41:26 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:12:58.968 09:41:26 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 69247 00:12:58.968 09:41:26 
blockdev_xnvme -- common/autotest_common.sh@952 -- # '[' -z 69247 ']' 00:12:58.968 09:41:26 blockdev_xnvme -- common/autotest_common.sh@956 -- # kill -0 69247 00:12:58.968 09:41:26 blockdev_xnvme -- common/autotest_common.sh@957 -- # uname 00:12:58.968 09:41:26 blockdev_xnvme -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:58.968 09:41:26 blockdev_xnvme -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69247 00:12:58.968 09:41:26 blockdev_xnvme -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:58.968 09:41:26 blockdev_xnvme -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:58.968 killing process with pid 69247 00:12:58.968 09:41:26 blockdev_xnvme -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69247' 00:12:58.968 09:41:26 blockdev_xnvme -- common/autotest_common.sh@971 -- # kill 69247 00:12:58.968 09:41:26 blockdev_xnvme -- common/autotest_common.sh@976 -- # wait 69247 00:13:00.346 09:41:27 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:00.346 09:41:27 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:00.346 09:41:27 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:13:00.346 09:41:27 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:00.346 09:41:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:00.346 ************************************ 00:13:00.346 START TEST bdev_hello_world 00:13:00.346 ************************************ 00:13:00.346 09:41:27 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:00.346 [2024-11-07 09:41:27.652444] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:13:00.346 [2024-11-07 09:41:27.652565] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69604 ] 00:13:00.346 [2024-11-07 09:41:27.809468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.346 [2024-11-07 09:41:27.891791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.605 [2024-11-07 09:41:28.176998] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:13:00.605 [2024-11-07 09:41:28.177043] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:13:00.605 [2024-11-07 09:41:28.177058] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:13:00.606 [2024-11-07 09:41:28.178891] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:13:00.606 [2024-11-07 09:41:28.179342] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:13:00.606 [2024-11-07 09:41:28.179366] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:13:00.606 [2024-11-07 09:41:28.179985] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
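The write/read round-trip just traced can be reproduced by hand with the exact command the harness ran; a minimal sketch, using the paths from this run:

    # hello_bdev writes "Hello World!" to the named bdev, reads it back on the
    # same io channel, and prints the string on success (as in the log above).
    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -b nvme0n1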
00:13:00.606 00:13:00.606 [2024-11-07 09:41:28.180018] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:13:01.549 00:13:01.549 real 0m1.336s 00:13:01.549 user 0m1.062s 00:13:01.549 sys 0m0.159s 00:13:01.549 09:41:28 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:01.549 ************************************ 00:13:01.549 END TEST bdev_hello_world 00:13:01.549 ************************************ 00:13:01.549 09:41:28 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:13:01.549 09:41:28 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:13:01.549 09:41:28 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:01.549 09:41:28 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:01.549 09:41:28 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:01.549 ************************************ 00:13:01.549 START TEST bdev_bounds 00:13:01.549 ************************************ 00:13:01.549 09:41:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:13:01.549 Process bdevio pid: 69642 00:13:01.549 09:41:28 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=69642 00:13:01.549 09:41:28 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:13:01.549 09:41:28 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 69642' 00:13:01.549 09:41:28 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 69642 00:13:01.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:01.549 09:41:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 69642 ']' 00:13:01.549 09:41:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:01.549 09:41:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:01.549 09:41:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:01.549 09:41:28 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:01.549 09:41:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:01.549 09:41:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:01.549 [2024-11-07 09:41:29.065970] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
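waitforlisten is the barrier between spawning bdevio and driving it: it blocks until the app answers on its RPC socket. A hedged sketch of that loop (the real helper in common/autotest_common.sh also uses the max_retries=100 seen above; the 0.5 s interval here is an assumption):

    # Poll the UNIX-domain RPC socket; rpc_get_methods only succeeds once the
    # app's RPC server is listening, so it doubles as a readiness probe.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 100; i > 0; i--)); do
            kill -0 "$pid" 2>/dev/null || return 1   # app died during startup
            scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &>/dev/null \
                && return 0
            sleep 0.5                                # assumed interval
        done
        return 1
    }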
00:13:01.549 [2024-11-07 09:41:29.066124] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69642 ] 00:13:01.811 [2024-11-07 09:41:29.232426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:01.811 [2024-11-07 09:41:29.357426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:01.811 [2024-11-07 09:41:29.357787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:01.811 [2024-11-07 09:41:29.357812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.384 09:41:29 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:02.384 09:41:29 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:13:02.384 09:41:29 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:13:02.646 I/O targets: 00:13:02.646 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:13:02.646 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:13:02.646 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:02.646 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:02.646 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:02.646 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:13:02.646 00:13:02.646 00:13:02.646 CUnit - A unit testing framework for C - Version 2.1-3 00:13:02.646 http://cunit.sourceforge.net/ 00:13:02.646 00:13:02.646 00:13:02.646 Suite: bdevio tests on: nvme3n1 00:13:02.646 Test: blockdev write read block ...passed 00:13:02.646 Test: blockdev write zeroes read block ...passed 00:13:02.646 Test: blockdev write zeroes read no split ...passed 00:13:02.646 Test: blockdev write zeroes read split ...passed 00:13:02.646 Test: blockdev write zeroes read split partial ...passed 00:13:02.646 Test: blockdev reset ...passed 00:13:02.646 Test: blockdev write read 8 blocks ...passed 00:13:02.646 Test: blockdev write read size > 128k ...passed 00:13:02.646 Test: blockdev write read invalid size ...passed 00:13:02.646 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:02.646 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:02.646 Test: blockdev write read max offset ...passed 00:13:02.646 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:02.646 Test: blockdev writev readv 8 blocks ...passed 00:13:02.646 Test: blockdev writev readv 30 x 1block ...passed 00:13:02.646 Test: blockdev writev readv block ...passed 00:13:02.646 Test: blockdev writev readv size > 128k ...passed 00:13:02.646 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:02.646 Test: blockdev comparev and writev ...passed 00:13:02.646 Test: blockdev nvme passthru rw ...passed 00:13:02.646 Test: blockdev nvme passthru vendor specific ...passed 00:13:02.646 Test: blockdev nvme admin passthru ...passed 00:13:02.646 Test: blockdev copy ...passed 00:13:02.646 Suite: bdevio tests on: nvme2n3 00:13:02.646 Test: blockdev write read block ...passed 00:13:02.646 Test: blockdev write zeroes read block ...passed 00:13:02.646 Test: blockdev write zeroes read no split ...passed 00:13:02.646 Test: blockdev write zeroes read split ...passed 00:13:02.646 Test: blockdev write zeroes read split partial ...passed 00:13:02.646 Test: blockdev reset ...passed 
00:13:02.646 Test: blockdev write read 8 blocks ...passed 00:13:02.646 Test: blockdev write read size > 128k ...passed 00:13:02.646 Test: blockdev write read invalid size ...passed 00:13:02.646 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:02.646 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:02.646 Test: blockdev write read max offset ...passed 00:13:02.646 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:02.646 Test: blockdev writev readv 8 blocks ...passed 00:13:02.646 Test: blockdev writev readv 30 x 1block ...passed 00:13:02.646 Test: blockdev writev readv block ...passed 00:13:02.646 Test: blockdev writev readv size > 128k ...passed 00:13:02.646 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:02.646 Test: blockdev comparev and writev ...passed 00:13:02.646 Test: blockdev nvme passthru rw ...passed 00:13:02.646 Test: blockdev nvme passthru vendor specific ...passed 00:13:02.646 Test: blockdev nvme admin passthru ...passed 00:13:02.646 Test: blockdev copy ...passed 00:13:02.646 Suite: bdevio tests on: nvme2n2 00:13:02.646 Test: blockdev write read block ...passed 00:13:02.646 Test: blockdev write zeroes read block ...passed 00:13:02.646 Test: blockdev write zeroes read no split ...passed 00:13:02.646 Test: blockdev write zeroes read split ...passed 00:13:02.646 Test: blockdev write zeroes read split partial ...passed 00:13:02.646 Test: blockdev reset ...passed 00:13:02.646 Test: blockdev write read 8 blocks ...passed 00:13:02.646 Test: blockdev write read size > 128k ...passed 00:13:02.646 Test: blockdev write read invalid size ...passed 00:13:02.646 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:02.646 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:02.646 Test: blockdev write read max offset ...passed 00:13:02.646 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:02.646 Test: blockdev writev readv 8 blocks ...passed 00:13:02.646 Test: blockdev writev readv 30 x 1block ...passed 00:13:02.646 Test: blockdev writev readv block ...passed 00:13:02.646 Test: blockdev writev readv size > 128k ...passed 00:13:02.646 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:02.646 Test: blockdev comparev and writev ...passed 00:13:02.646 Test: blockdev nvme passthru rw ...passed 00:13:02.646 Test: blockdev nvme passthru vendor specific ...passed 00:13:02.646 Test: blockdev nvme admin passthru ...passed 00:13:02.646 Test: blockdev copy ...passed 00:13:02.646 Suite: bdevio tests on: nvme2n1 00:13:02.646 Test: blockdev write read block ...passed 00:13:02.646 Test: blockdev write zeroes read block ...passed 00:13:02.908 Test: blockdev write zeroes read no split ...passed 00:13:02.908 Test: blockdev write zeroes read split ...passed 00:13:02.908 Test: blockdev write zeroes read split partial ...passed 00:13:02.908 Test: blockdev reset ...passed 00:13:02.908 Test: blockdev write read 8 blocks ...passed 00:13:02.908 Test: blockdev write read size > 128k ...passed 00:13:02.908 Test: blockdev write read invalid size ...passed 00:13:02.908 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:02.908 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:02.908 Test: blockdev write read max offset ...passed 00:13:02.908 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:02.908 Test: blockdev writev readv 8 blocks 
...passed 00:13:02.908 Test: blockdev writev readv 30 x 1block ...passed 00:13:02.908 Test: blockdev writev readv block ...passed 00:13:02.908 Test: blockdev writev readv size > 128k ...passed 00:13:02.908 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:02.908 Test: blockdev comparev and writev ...passed 00:13:02.908 Test: blockdev nvme passthru rw ...passed 00:13:02.908 Test: blockdev nvme passthru vendor specific ...passed 00:13:02.908 Test: blockdev nvme admin passthru ...passed 00:13:02.908 Test: blockdev copy ...passed 00:13:02.908 Suite: bdevio tests on: nvme1n1 00:13:02.908 Test: blockdev write read block ...passed 00:13:02.908 Test: blockdev write zeroes read block ...passed 00:13:02.908 Test: blockdev write zeroes read no split ...passed 00:13:02.908 Test: blockdev write zeroes read split ...passed 00:13:02.908 Test: blockdev write zeroes read split partial ...passed 00:13:02.908 Test: blockdev reset ...passed 00:13:02.908 Test: blockdev write read 8 blocks ...passed 00:13:02.908 Test: blockdev write read size > 128k ...passed 00:13:02.908 Test: blockdev write read invalid size ...passed 00:13:02.908 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:02.908 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:02.908 Test: blockdev write read max offset ...passed 00:13:02.908 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:02.908 Test: blockdev writev readv 8 blocks ...passed 00:13:02.908 Test: blockdev writev readv 30 x 1block ...passed 00:13:02.908 Test: blockdev writev readv block ...passed 00:13:02.908 Test: blockdev writev readv size > 128k ...passed 00:13:02.908 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:02.908 Test: blockdev comparev and writev ...passed 00:13:02.908 Test: blockdev nvme passthru rw ...passed 00:13:02.908 Test: blockdev nvme passthru vendor specific ...passed 00:13:02.908 Test: blockdev nvme admin passthru ...passed 00:13:02.908 Test: blockdev copy ...passed 00:13:02.908 Suite: bdevio tests on: nvme0n1 00:13:02.908 Test: blockdev write read block ...passed 00:13:02.908 Test: blockdev write zeroes read block ...passed 00:13:02.908 Test: blockdev write zeroes read no split ...passed 00:13:02.908 Test: blockdev write zeroes read split ...passed 00:13:02.908 Test: blockdev write zeroes read split partial ...passed 00:13:02.908 Test: blockdev reset ...passed 00:13:02.908 Test: blockdev write read 8 blocks ...passed 00:13:02.908 Test: blockdev write read size > 128k ...passed 00:13:02.908 Test: blockdev write read invalid size ...passed 00:13:02.908 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:02.908 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:02.908 Test: blockdev write read max offset ...passed 00:13:02.908 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:02.908 Test: blockdev writev readv 8 blocks ...passed 00:13:02.908 Test: blockdev writev readv 30 x 1block ...passed 00:13:02.908 Test: blockdev writev readv block ...passed 00:13:02.908 Test: blockdev writev readv size > 128k ...passed 00:13:02.908 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:02.908 Test: blockdev comparev and writev ...passed 00:13:02.908 Test: blockdev nvme passthru rw ...passed 00:13:02.908 Test: blockdev nvme passthru vendor specific ...passed 00:13:02.908 Test: blockdev nvme admin passthru ...passed 00:13:02.908 Test: blockdev copy ...passed 
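All six suites above run the same 23-test matrix (6 x 23 = the 138 tests in the summary that follows); which operations can do real I/O is dictated by the supported_io_types flags in the bdev_get_bdevs dump earlier in this log. A hedged one-liner to summarize those flags per bdev (the jq filter is illustrative, not part of the harness):

    # Print each bdev with the io types it reports as supported, e.g.
    # "nvme0n1: read,write,write_zeroes" for the xNVMe bdevs in this run.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs | jq -r '
        .[] | "\(.name): " + (.supported_io_types | to_entries
              | map(select(.value) | .key) | join(","))'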
00:13:02.908 00:13:02.908 Run Summary: Type Total Ran Passed Failed Inactive 00:13:02.908 suites 6 6 n/a 0 0 00:13:02.908 tests 138 138 138 0 0 00:13:02.908 asserts 780 780 780 0 n/a 00:13:02.908 00:13:02.908 Elapsed time = 1.214 seconds 00:13:02.908 0 00:13:02.908 09:41:30 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 69642 00:13:02.908 09:41:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 69642 ']' 00:13:02.908 09:41:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 69642 00:13:02.908 09:41:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:13:03.169 09:41:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:03.169 09:41:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69642 00:13:03.169 09:41:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:03.169 killing process with pid 69642 00:13:03.169 09:41:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:03.169 09:41:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69642' 00:13:03.169 09:41:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@971 -- # kill 69642 00:13:03.169 09:41:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@976 -- # wait 69642 00:13:03.736 09:41:31 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:13:03.736 00:13:03.736 real 0m2.291s 00:13:03.736 user 0m5.664s 00:13:03.736 sys 0m0.392s 00:13:03.736 09:41:31 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:03.736 09:41:31 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:03.736 ************************************ 00:13:03.736 END TEST bdev_bounds 00:13:03.736 ************************************ 00:13:03.736 09:41:31 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:13:03.736 09:41:31 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:13:03.736 09:41:31 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:03.736 09:41:31 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:03.736 ************************************ 00:13:03.736 START TEST bdev_nbd 00:13:03.736 ************************************ 00:13:03.736 09:41:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:13:03.736 09:41:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:13:03.736 09:41:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:13:03.736 09:41:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:03.736 09:41:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:03.736 09:41:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:03.736 09:41:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:13:03.736 09:41:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
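nbd_function_test, whose state is being set up here, exports each of the six bdevs as a kernel /dev/nbdX device over the dedicated /var/tmp/spdk-nbd.sock RPC socket and round-trips a block through it with dd. The per-device cycle, reduced to its RPC essentials (a sketch; device and bdev names as in this run, output path illustrative):

    # Export the bdev, read one 4 KiB block through the kernel NBD device with
    # O_DIRECT to bypass the page cache, then tear the export down again.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc nbd_start_disk nvme0n1 /dev/nbd0
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    $rpc nbd_stop_disk /dev/nbd0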
00:13:03.736 09:41:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:13:03.736 09:41:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:03.737 09:41:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:13:03.737 09:41:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:13:03.737 09:41:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:03.737 09:41:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:13:03.737 09:41:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:03.737 09:41:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:13:03.737 09:41:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=69698 00:13:03.737 09:41:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:13:03.737 09:41:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 69698 /var/tmp/spdk-nbd.sock 00:13:03.737 09:41:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 69698 ']' 00:13:03.737 09:41:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:03.737 09:41:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:03.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:13:03.737 09:41:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:03.737 09:41:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:03.737 09:41:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:03.737 09:41:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:03.995 [2024-11-07 09:41:31.417581] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
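Once bdev_svc is up, every nbd_start_disk below is followed by a waitfornbd barrier before dd touches the device. A sketch matching the grep loop visible in the trace (the sleep interval is an assumption; the trace only shows the polling predicate):

    # An NBD device is usable once its name appears in /proc/partitions;
    # poll up to 20 times before giving up, as the traced helper does.
    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && return 0
            sleep 0.1   # assumed interval
        done
        return 1
    }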
00:13:03.995 [2024-11-07 09:41:31.417709] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:03.995 [2024-11-07 09:41:31.571407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:03.995 [2024-11-07 09:41:31.646942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.562 09:41:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:04.562 09:41:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:13:04.562 09:41:32 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:13:04.562 09:41:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:04.562 09:41:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:04.562 09:41:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:13:04.562 09:41:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:13:04.562 09:41:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:04.562 09:41:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:04.562 09:41:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:13:04.562 09:41:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:13:04.562 09:41:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:13:04.562 09:41:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:13:04.562 09:41:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:04.562 09:41:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:13:04.821 09:41:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:13:04.821 09:41:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:13:04.821 09:41:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:13:04.821 09:41:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:13:04.821 09:41:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:04.821 09:41:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:04.821 09:41:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:04.821 09:41:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:13:04.821 09:41:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:04.821 09:41:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:04.821 09:41:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:04.821 09:41:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:04.821 
1+0 records in 00:13:04.821 1+0 records out 00:13:04.821 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283554 s, 14.4 MB/s 00:13:04.821 09:41:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:04.821 09:41:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:04.821 09:41:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:04.821 09:41:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:04.821 09:41:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:04.821 09:41:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:04.821 09:41:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:04.821 09:41:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:13:05.080 09:41:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:13:05.080 09:41:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:13:05.080 09:41:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:13:05.080 09:41:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:13:05.080 09:41:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:05.080 09:41:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:05.080 09:41:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:05.080 09:41:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:13:05.080 09:41:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:05.080 09:41:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:05.080 09:41:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:05.080 09:41:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:05.080 1+0 records in 00:13:05.080 1+0 records out 00:13:05.080 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000521807 s, 7.8 MB/s 00:13:05.080 09:41:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.080 09:41:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:05.080 09:41:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.080 09:41:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:05.080 09:41:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:05.080 09:41:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:05.080 09:41:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:05.080 09:41:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:13:05.338 09:41:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:13:05.338 09:41:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:13:05.338 09:41:32 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:13:05.338 09:41:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd2 00:13:05.338 09:41:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:05.338 09:41:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:05.338 09:41:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:05.338 09:41:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd2 /proc/partitions 00:13:05.338 09:41:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:05.338 09:41:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:05.338 09:41:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:05.338 09:41:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:05.338 1+0 records in 00:13:05.338 1+0 records out 00:13:05.338 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000518319 s, 7.9 MB/s 00:13:05.338 09:41:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.338 09:41:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:05.338 09:41:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.338 09:41:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:05.338 09:41:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:05.338 09:41:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:05.338 09:41:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:05.338 09:41:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:13:05.599 09:41:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:13:05.599 09:41:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:13:05.599 09:41:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:13:05.599 09:41:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd3 00:13:05.599 09:41:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:05.599 09:41:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:05.599 09:41:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:05.599 09:41:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd3 /proc/partitions 00:13:05.599 09:41:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:05.599 09:41:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:05.599 09:41:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:05.599 09:41:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:05.599 1+0 records in 00:13:05.599 1+0 records out 00:13:05.599 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00121956 s, 3.4 MB/s 00:13:05.599 09:41:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.599 09:41:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:05.599 09:41:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.599 09:41:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:05.599 09:41:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:05.599 09:41:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:05.599 09:41:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:05.599 09:41:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:13:05.861 09:41:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:13:05.861 09:41:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:13:05.861 09:41:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:13:05.861 09:41:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd4 00:13:05.861 09:41:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:05.861 09:41:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:05.861 09:41:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:05.861 09:41:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd4 /proc/partitions 00:13:05.861 09:41:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:05.861 09:41:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:05.861 09:41:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:05.861 09:41:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:05.861 1+0 records in 00:13:05.861 1+0 records out 00:13:05.861 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000857196 s, 4.8 MB/s 00:13:05.861 09:41:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.861 09:41:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:05.861 09:41:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.861 09:41:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:05.861 09:41:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:05.861 09:41:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:05.861 09:41:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:05.861 09:41:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:13:06.122 09:41:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:13:06.122 09:41:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:13:06.122 09:41:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:13:06.122 09:41:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd5 00:13:06.122 09:41:33 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:06.122 09:41:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:06.122 09:41:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:06.122 09:41:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd5 /proc/partitions 00:13:06.122 09:41:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:06.122 09:41:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:06.122 09:41:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:06.122 09:41:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:06.122 1+0 records in 00:13:06.122 1+0 records out 00:13:06.122 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00136055 s, 3.0 MB/s 00:13:06.122 09:41:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.122 09:41:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:06.122 09:41:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.122 09:41:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:06.122 09:41:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:06.122 09:41:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:06.122 09:41:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:06.122 09:41:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:06.384 09:41:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:13:06.384 { 00:13:06.384 "nbd_device": "/dev/nbd0", 00:13:06.384 "bdev_name": "nvme0n1" 00:13:06.384 }, 00:13:06.384 { 00:13:06.384 "nbd_device": "/dev/nbd1", 00:13:06.384 "bdev_name": "nvme1n1" 00:13:06.384 }, 00:13:06.384 { 00:13:06.384 "nbd_device": "/dev/nbd2", 00:13:06.384 "bdev_name": "nvme2n1" 00:13:06.384 }, 00:13:06.384 { 00:13:06.384 "nbd_device": "/dev/nbd3", 00:13:06.384 "bdev_name": "nvme2n2" 00:13:06.384 }, 00:13:06.384 { 00:13:06.384 "nbd_device": "/dev/nbd4", 00:13:06.384 "bdev_name": "nvme2n3" 00:13:06.384 }, 00:13:06.384 { 00:13:06.384 "nbd_device": "/dev/nbd5", 00:13:06.384 "bdev_name": "nvme3n1" 00:13:06.384 } 00:13:06.384 ]' 00:13:06.384 09:41:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:13:06.384 09:41:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:13:06.384 { 00:13:06.384 "nbd_device": "/dev/nbd0", 00:13:06.384 "bdev_name": "nvme0n1" 00:13:06.384 }, 00:13:06.384 { 00:13:06.384 "nbd_device": "/dev/nbd1", 00:13:06.384 "bdev_name": "nvme1n1" 00:13:06.384 }, 00:13:06.384 { 00:13:06.384 "nbd_device": "/dev/nbd2", 00:13:06.384 "bdev_name": "nvme2n1" 00:13:06.384 }, 00:13:06.384 { 00:13:06.384 "nbd_device": "/dev/nbd3", 00:13:06.384 "bdev_name": "nvme2n2" 00:13:06.384 }, 00:13:06.384 { 00:13:06.384 "nbd_device": "/dev/nbd4", 00:13:06.384 "bdev_name": "nvme2n3" 00:13:06.384 }, 00:13:06.384 { 00:13:06.384 "nbd_device": "/dev/nbd5", 00:13:06.384 "bdev_name": "nvme3n1" 00:13:06.384 } 00:13:06.384 ]' 00:13:06.384 09:41:33 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:13:06.384 09:41:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:13:06.384 09:41:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:06.384 09:41:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:13:06.384 09:41:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:06.384 09:41:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:06.384 09:41:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:06.384 09:41:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:06.645 09:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:06.645 09:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:06.645 09:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:06.645 09:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:06.645 09:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:06.645 09:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:06.645 09:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:06.645 09:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:06.645 09:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:06.645 09:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:06.906 09:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:06.906 09:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:06.906 09:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:06.906 09:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:06.906 09:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:06.906 09:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:06.906 09:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:06.906 09:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:06.906 09:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:06.906 09:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:07.167 09:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:07.167 09:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:07.168 09:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:07.168 09:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:07.168 09:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:07.168 09:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:13:07.168 09:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:07.168 09:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:07.168 09:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:07.168 09:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:07.168 09:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:07.168 09:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:13:07.168 09:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:07.168 09:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:07.168 09:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:07.168 09:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:07.168 09:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:07.168 09:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:07.168 09:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:07.168 09:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:07.429 09:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:07.429 09:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:07.429 09:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:07.429 09:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:07.429 09:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:07.429 09:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:07.429 09:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:07.429 09:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:07.429 09:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:07.429 09:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:07.689 09:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:07.689 09:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:07.689 09:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:07.689 09:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:07.689 09:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:07.689 09:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:07.689 09:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:07.689 09:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:07.689 09:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:07.689 09:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:07.689 09:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:07.950 09:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:07.950 09:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:07.950 09:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:07.950 09:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:07.950 09:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:07.950 09:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:07.950 09:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:07.950 09:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:07.950 09:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:07.950 09:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:13:07.950 09:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:13:07.950 09:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:13:07.950 09:41:35 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:07.950 09:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:07.951 09:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:07.951 09:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:07.951 09:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:07.951 09:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:07.951 09:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:07.951 09:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:07.951 09:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:07.951 09:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:07.951 09:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:07.951 09:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:07.951 09:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:13:07.951 09:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:07.951 09:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:07.951 09:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:13:08.212 /dev/nbd0 00:13:08.212 09:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:08.212 09:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:08.212 09:41:35 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:13:08.212 09:41:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:08.212 09:41:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:08.212 09:41:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:08.212 09:41:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:13:08.212 09:41:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:08.212 09:41:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:08.212 09:41:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:08.212 09:41:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:08.212 1+0 records in 00:13:08.212 1+0 records out 00:13:08.212 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000973593 s, 4.2 MB/s 00:13:08.212 09:41:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:08.212 09:41:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:08.212 09:41:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:08.212 09:41:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:08.212 09:41:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:08.212 09:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:08.212 09:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:08.212 09:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:13:08.473 /dev/nbd1 00:13:08.473 09:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:08.473 09:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:08.473 09:41:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:13:08.473 09:41:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:08.473 09:41:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:08.473 09:41:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:08.473 09:41:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:13:08.473 09:41:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:08.473 09:41:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:08.473 09:41:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:08.473 09:41:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:08.473 1+0 records in 00:13:08.473 1+0 records out 00:13:08.473 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000848827 s, 4.8 MB/s 00:13:08.473 09:41:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:08.473 09:41:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:08.473 09:41:35 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:08.473 09:41:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:08.473 09:41:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:08.473 09:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:08.473 09:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:08.473 09:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:13:08.735 /dev/nbd10 00:13:08.735 09:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:13:08.735 09:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:13:08.735 09:41:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd10 00:13:08.735 09:41:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:08.735 09:41:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:08.735 09:41:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:08.735 09:41:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd10 /proc/partitions 00:13:08.735 09:41:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:08.735 09:41:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:08.735 09:41:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:08.735 09:41:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:08.735 1+0 records in 00:13:08.735 1+0 records out 00:13:08.735 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0012602 s, 3.3 MB/s 00:13:08.735 09:41:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:08.735 09:41:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:08.735 09:41:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:08.735 09:41:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:08.735 09:41:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:08.735 09:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:08.735 09:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:08.735 09:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:13:08.735 /dev/nbd11 00:13:08.998 09:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:13:08.998 09:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:13:08.998 09:41:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd11 00:13:08.998 09:41:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:08.998 09:41:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:08.998 09:41:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:08.998 09:41:36 blockdev_xnvme.bdev_nbd 
-- common/autotest_common.sh@874 -- # grep -q -w nbd11 /proc/partitions 00:13:08.998 09:41:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:08.998 09:41:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:08.998 09:41:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:08.998 09:41:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:08.998 1+0 records in 00:13:08.998 1+0 records out 00:13:08.998 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00105077 s, 3.9 MB/s 00:13:08.998 09:41:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:08.998 09:41:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:08.998 09:41:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:08.998 09:41:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:08.998 09:41:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:08.998 09:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:08.998 09:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:08.998 09:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:13:08.998 /dev/nbd12 00:13:09.260 09:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:13:09.260 09:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:13:09.260 09:41:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd12 00:13:09.260 09:41:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:09.260 09:41:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:09.260 09:41:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:09.260 09:41:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd12 /proc/partitions 00:13:09.260 09:41:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:09.260 09:41:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:09.260 09:41:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:09.260 09:41:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:09.260 1+0 records in 00:13:09.260 1+0 records out 00:13:09.260 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00118522 s, 3.5 MB/s 00:13:09.260 09:41:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:09.260 09:41:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:09.260 09:41:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:09.260 09:41:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:09.260 09:41:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:09.260 09:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( 
i++ )) 00:13:09.260 09:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:09.260 09:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:13:09.260 /dev/nbd13 00:13:09.260 09:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:13:09.260 09:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:13:09.260 09:41:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd13 00:13:09.260 09:41:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:09.260 09:41:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:09.260 09:41:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:09.260 09:41:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd13 /proc/partitions 00:13:09.522 09:41:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:09.522 09:41:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:09.522 09:41:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:09.522 09:41:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:09.522 1+0 records in 00:13:09.522 1+0 records out 00:13:09.522 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00115657 s, 3.5 MB/s 00:13:09.522 09:41:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:09.522 09:41:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:09.522 09:41:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:09.522 09:41:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:09.522 09:41:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:09.522 09:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:09.522 09:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:09.522 09:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:09.522 09:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:09.522 09:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:09.522 09:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:09.522 { 00:13:09.522 "nbd_device": "/dev/nbd0", 00:13:09.522 "bdev_name": "nvme0n1" 00:13:09.522 }, 00:13:09.522 { 00:13:09.522 "nbd_device": "/dev/nbd1", 00:13:09.522 "bdev_name": "nvme1n1" 00:13:09.522 }, 00:13:09.522 { 00:13:09.522 "nbd_device": "/dev/nbd10", 00:13:09.522 "bdev_name": "nvme2n1" 00:13:09.522 }, 00:13:09.522 { 00:13:09.522 "nbd_device": "/dev/nbd11", 00:13:09.522 "bdev_name": "nvme2n2" 00:13:09.522 }, 00:13:09.522 { 00:13:09.522 "nbd_device": "/dev/nbd12", 00:13:09.522 "bdev_name": "nvme2n3" 00:13:09.522 }, 00:13:09.522 { 00:13:09.522 "nbd_device": "/dev/nbd13", 00:13:09.522 "bdev_name": "nvme3n1" 00:13:09.522 } 00:13:09.522 ]' 00:13:09.522 09:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
echo '[ 00:13:09.522 { 00:13:09.522 "nbd_device": "/dev/nbd0", 00:13:09.522 "bdev_name": "nvme0n1" 00:13:09.522 }, 00:13:09.522 { 00:13:09.522 "nbd_device": "/dev/nbd1", 00:13:09.522 "bdev_name": "nvme1n1" 00:13:09.522 }, 00:13:09.522 { 00:13:09.522 "nbd_device": "/dev/nbd10", 00:13:09.522 "bdev_name": "nvme2n1" 00:13:09.522 }, 00:13:09.522 { 00:13:09.522 "nbd_device": "/dev/nbd11", 00:13:09.522 "bdev_name": "nvme2n2" 00:13:09.522 }, 00:13:09.522 { 00:13:09.522 "nbd_device": "/dev/nbd12", 00:13:09.522 "bdev_name": "nvme2n3" 00:13:09.522 }, 00:13:09.522 { 00:13:09.522 "nbd_device": "/dev/nbd13", 00:13:09.522 "bdev_name": "nvme3n1" 00:13:09.522 } 00:13:09.522 ]' 00:13:09.522 09:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:09.783 09:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:09.783 /dev/nbd1 00:13:09.783 /dev/nbd10 00:13:09.783 /dev/nbd11 00:13:09.783 /dev/nbd12 00:13:09.783 /dev/nbd13' 00:13:09.783 09:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:09.783 09:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:09.783 /dev/nbd1 00:13:09.783 /dev/nbd10 00:13:09.783 /dev/nbd11 00:13:09.783 /dev/nbd12 00:13:09.783 /dev/nbd13' 00:13:09.783 09:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:13:09.783 09:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:13:09.783 09:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:13:09.783 09:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:13:09.783 09:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:13:09.783 09:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:09.783 09:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:09.783 09:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:09.783 09:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:09.783 09:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:09.783 09:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:13:09.783 256+0 records in 00:13:09.783 256+0 records out 00:13:09.783 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00996885 s, 105 MB/s 00:13:09.783 09:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:09.783 09:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:09.783 256+0 records in 00:13:09.783 256+0 records out 00:13:09.783 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.231972 s, 4.5 MB/s 00:13:09.783 09:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:09.783 09:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:10.387 256+0 records in 00:13:10.388 256+0 records out 00:13:10.388 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.310537 s, 3.4 MB/s 00:13:10.388 09:41:37 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:10.388 09:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:13:10.388 256+0 records in 00:13:10.388 256+0 records out 00:13:10.388 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.201068 s, 5.2 MB/s 00:13:10.388 09:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:10.388 09:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:13:10.650 256+0 records in 00:13:10.650 256+0 records out 00:13:10.650 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.112616 s, 9.3 MB/s 00:13:10.650 09:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:10.650 09:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:13:10.650 256+0 records in 00:13:10.650 256+0 records out 00:13:10.650 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.154478 s, 6.8 MB/s 00:13:10.650 09:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:10.650 09:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:13:10.927 256+0 records in 00:13:10.928 256+0 records out 00:13:10.928 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.180145 s, 5.8 MB/s 00:13:10.928 09:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:13:10.928 09:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:10.928 09:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:10.928 09:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:10.928 09:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:10.928 09:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:10.928 09:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:10.928 09:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:10.928 09:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:13:10.928 09:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:10.928 09:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:13:10.928 09:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:10.928 09:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:13:10.928 09:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:10.928 09:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:13:10.928 09:41:38 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:10.928 09:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:13:10.928 09:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:10.928 09:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:13:10.928 09:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:10.928 09:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:10.928 09:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:10.928 09:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:10.928 09:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:10.928 09:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:10.928 09:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:10.928 09:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:11.189 09:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:11.189 09:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:11.189 09:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:11.189 09:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:11.189 09:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:11.189 09:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:11.189 09:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:11.189 09:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:11.189 09:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:11.189 09:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:11.451 09:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:11.451 09:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:11.451 09:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:11.451 09:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:11.451 09:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:11.451 09:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:11.451 09:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:11.451 09:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:11.451 09:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:11.451 09:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 
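The start/stop churn above is the harness's waitfornbd/waitfornbd_exit pattern: after nbd_start_disk it polls /proc/partitions until the kernel exposes the device, then proves it readable with one direct-I/O block; after nbd_stop_disk it polls until the name disappears. A condensed sketch of that pattern follows — the 20-iteration retry bound, the grep test, and the dd/stat check are taken from the trace, while the sleep interval and the /tmp scratch path are assumptions (the xtrace only shows iterations that ran):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            # up once the kernel lists the device in /proc/partitions
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed back-off; not visible in the xtrace
        done
        # read one 4 KiB block with O_DIRECT and confirm a non-empty copy
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [[ $size != 0 ]]
    }

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            # stopped once the name no longer appears in /proc/partitions
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1   # assumed back-off
        done
    }

    "$rpc" -s "$sock" nbd_start_disk nvme2n1 /dev/nbd10 && waitfornbd nbd10
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd10 && waitfornbd_exit nbd10
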
00:13:11.713 09:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:11.713 09:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:11.713 09:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:11.713 09:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:11.713 09:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:11.713 09:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:11.713 09:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:11.713 09:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:11.713 09:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:11.713 09:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:11.975 09:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:11.975 09:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:11.975 09:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:11.975 09:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:11.975 09:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:11.975 09:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:11.975 09:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:11.975 09:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:11.975 09:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:11.975 09:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:11.975 09:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:13:11.975 09:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:11.975 09:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:11.975 09:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:11.975 09:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:11.975 09:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:11.975 09:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:11.975 09:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:11.975 09:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:11.975 09:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:12.237 09:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:12.237 09:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:12.237 09:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:12.237 09:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:12.237 09:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:12.237 09:41:39 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:12.237 09:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:12.237 09:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:12.237 09:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:12.237 09:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:12.237 09:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:12.500 09:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:12.500 09:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:12.500 09:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:12.500 09:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:12.500 09:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:12.500 09:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:12.500 09:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:12.500 09:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:12.500 09:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:12.500 09:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:13:12.500 09:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:12.500 09:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:13:12.500 09:41:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:12.500 09:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:12.500 09:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:13:12.500 09:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:13:12.761 malloc_lvol_verify 00:13:12.761 09:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:13:13.022 2a1be610-9398-40f3-a293-4aff3f892b65 00:13:13.022 09:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:13:13.282 59ecbfa2-708a-456d-91b4-d6a05029f89c 00:13:13.282 09:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:13.545 /dev/nbd0 00:13:13.545 09:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:13:13.545 09:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:13:13.545 09:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:13:13.545 09:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:13:13.545 09:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:13:13.545 mke2fs 1.47.0 (5-Feb-2023) 00:13:13.545 Discarding device blocks: 0/4096 done 
00:13:13.545 Creating filesystem with 4096 1k blocks and 1024 inodes 00:13:13.545 00:13:13.545 Allocating group tables: 0/1 done 00:13:13.545 Writing inode tables: 0/1 done 00:13:13.545 Creating journal (1024 blocks): done 00:13:13.545 Writing superblocks and filesystem accounting information: 0/1 done 00:13:13.545 00:13:13.545 09:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:13.545 09:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:13.545 09:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:13.545 09:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:13.545 09:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:13.545 09:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:13.545 09:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:13.805 09:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:13.805 09:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:13.805 09:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:13.805 09:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:13.805 09:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:13.805 09:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:13.805 09:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:13.805 09:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:13.805 09:41:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 69698 00:13:13.805 09:41:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 69698 ']' 00:13:13.805 09:41:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 69698 00:13:13.805 09:41:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:13:13.805 09:41:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:13.805 09:41:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69698 00:13:13.805 09:41:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:13.805 killing process with pid 69698 00:13:13.805 09:41:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:13.806 09:41:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69698' 00:13:13.806 09:41:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@971 -- # kill 69698 00:13:13.806 09:41:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@976 -- # wait 69698 00:13:14.748 09:41:42 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:13:14.748 00:13:14.748 real 0m10.717s 00:13:14.748 user 0m14.502s 00:13:14.748 sys 0m3.725s 00:13:14.748 09:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:14.748 ************************************ 00:13:14.748 END TEST bdev_nbd 00:13:14.748 ************************************ 00:13:14.748 09:41:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:14.748 
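Stripped of the xtrace noise, the nbd_with_lvol_verify step that just passed is a short RPC smoke test: build a malloc bdev, layer a logical-volume store and one volume on it, export the volume over NBD, and prove it can hold a filesystem. A minimal sketch, with the commands, names, and sizes taken from the trace (sizes are in MB per the RPC arguments shown above):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    # 16 MB malloc bdev with 512-byte blocks
    "$rpc" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512
    # lvstore on top of it, then a 4 MB logical volume
    "$rpc" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs
    "$rpc" -s "$sock" bdev_lvol_create lvol 4 -l lvs
    # export the lvol as an NBD device and format it
    "$rpc" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0
    mkfs.ext4 /dev/nbd0
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0

The mkfs.ext4 success is the actual assertion here: if the lvol's capacity or I/O path were broken, the format above would fail before the stop.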
09:41:42 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:13:14.748 09:41:42 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:13:14.748 09:41:42 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:13:14.748 09:41:42 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:13:14.748 09:41:42 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:14.748 09:41:42 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:14.748 09:41:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:14.748 ************************************ 00:13:14.748 START TEST bdev_fio 00:13:14.748 ************************************ 00:13:14.748 09:41:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1127 -- # fio_test_suite '' 00:13:14.748 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:13:14.748 09:41:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:13:14.748 09:41:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:13:14.748 09:41:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:13:14.748 09:41:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:13:14.748 09:41:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:13:14.748 09:41:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:13:14.748 09:41:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:13:14.748 09:41:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:14.748 09:41:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=verify 00:13:14.748 09:41:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type=AIO 00:13:14.748 09:41:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:13:14.748 09:41:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local fio_dir=/usr/src/fio 00:13:14.748 09:41:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:14.748 09:41:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z verify ']' 00:13:14.748 09:41:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:13:14.748 09:41:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:14.748 09:41:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:13:14.748 09:41:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1315 -- # '[' verify == verify ']' 00:13:14.748 09:41:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1316 -- # cat 00:13:14.748 09:41:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # '[' AIO == AIO ']' 00:13:14.748 09:41:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1326 -- # /usr/src/fio/fio --version 00:13:14.748 09:41:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1326 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:13:14.748 09:41:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # echo serialize_overlap=1 00:13:14.748 09:41:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- 
# for b in "${bdevs_name[@]}" 00:13:14.748 09:41:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:13:14.748 09:41:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:13:14.748 09:41:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:14.748 09:41:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:13:14.748 09:41:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:13:14.748 09:41:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:14.748 09:41:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:13:14.748 09:41:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:13:14.748 09:41:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:14.748 09:41:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:13:14.748 09:41:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:13:14.748 09:41:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:14.748 09:41:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:13:14.748 09:41:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:13:14.748 09:41:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:14.748 09:41:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:13:14.748 09:41:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:13:14.748 09:41:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:13:14.748 09:41:42 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:14.748 09:41:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1103 -- # '[' 11 -le 1 ']' 00:13:14.748 09:41:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:14.748 09:41:42 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:14.748 ************************************ 00:13:14.748 START TEST bdev_fio_rw_verify 00:13:14.748 ************************************ 00:13:14.748 09:41:42 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1127 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:14.748 09:41:42 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:14.748 09:41:42 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:13:14.748 09:41:42 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:14.748 09:41:42 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local sanitizers 00:13:14.748 09:41:42 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:14.748 09:41:42 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # shift 00:13:14.749 09:41:42 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # local asan_lib= 00:13:14.749 09:41:42 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:13:14.749 09:41:42 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:14.749 09:41:42 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # grep libasan 00:13:14.749 09:41:42 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:13:14.749 09:41:42 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:14.749 09:41:42 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:14.749 09:41:42 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # break 00:13:14.749 09:41:42 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:14.749 09:41:42 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:14.749 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:14.749 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:14.749 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:14.749 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:14.749 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:14.749 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:14.749 fio-3.35 00:13:14.749 Starting 6 threads 00:13:26.987 00:13:26.987 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=70104: Thu Nov 7 09:41:53 2024 00:13:26.987 read: IOPS=11.1k, BW=43.3MiB/s (45.4MB/s)(433MiB/10001msec) 00:13:26.987 slat (usec): min=2, max=4831, avg= 6.83, stdev=28.27 00:13:26.987 clat (usec): min=92, max=10227, avg=1841.39, stdev=931.32 00:13:26.987 lat (usec): min=99, max=10257, avg=1848.22, stdev=932.06 
00:13:26.987 clat percentiles (usec): 00:13:26.987 | 50.000th=[ 1713], 99.000th=[ 4817], 99.900th=[ 6652], 99.990th=[ 8455], 00:13:26.987 | 99.999th=[10159] 00:13:26.987 write: IOPS=11.5k, BW=44.8MiB/s (47.0MB/s)(448MiB/10001msec); 0 zone resets 00:13:26.987 slat (usec): min=12, max=5282, avg=46.59, stdev=178.21 00:13:26.987 clat (usec): min=103, max=10353, avg=2050.29, stdev=984.49 00:13:26.987 lat (usec): min=116, max=10486, avg=2096.88, stdev=999.23 00:13:26.987 clat percentiles (usec): 00:13:26.987 | 50.000th=[ 1909], 99.000th=[ 5080], 99.900th=[ 6849], 99.990th=[ 9896], 00:13:26.987 | 99.999th=[10290] 00:13:26.987 bw ( KiB/s): min=35653, max=53088, per=99.77%, avg=45785.89, stdev=805.49, samples=114 00:13:26.987 iops : min= 8912, max=13272, avg=11445.58, stdev=201.39, samples=114 00:13:26.987 lat (usec) : 100=0.01%, 250=0.40%, 500=2.42%, 750=4.41%, 1000=6.64% 00:13:26.987 lat (msec) : 2=45.03%, 4=37.61%, 10=3.49%, 20=0.01% 00:13:26.987 cpu : usr=45.27%, sys=32.05%, ctx=4316, majf=0, minf=12441 00:13:26.987 IO depths : 1=11.4%, 2=23.8%, 4=51.3%, 8=13.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:26.987 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:26.987 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:26.987 issued rwts: total=110879,114740,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:26.987 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:26.987 00:13:26.987 Run status group 0 (all jobs): 00:13:26.987 READ: bw=43.3MiB/s (45.4MB/s), 43.3MiB/s-43.3MiB/s (45.4MB/s-45.4MB/s), io=433MiB (454MB), run=10001-10001msec 00:13:26.987 WRITE: bw=44.8MiB/s (47.0MB/s), 44.8MiB/s-44.8MiB/s (47.0MB/s-47.0MB/s), io=448MiB (470MB), run=10001-10001msec 00:13:26.987 ----------------------------------------------------- 00:13:26.987 Suppressions used: 00:13:26.987 count bytes template 00:13:26.987 6 48 /usr/src/fio/parse.c 00:13:26.987 3786 363456 /usr/src/fio/iolog.c 00:13:26.987 1 8 libtcmalloc_minimal.so 00:13:26.987 1 904 libcrypto.so 00:13:26.987 ----------------------------------------------------- 00:13:26.987 00:13:26.987 00:13:26.987 real 0m11.945s 00:13:26.987 user 0m28.642s 00:13:26.987 sys 0m19.556s 00:13:26.987 09:41:54 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:26.987 ************************************ 00:13:26.987 END TEST bdev_fio_rw_verify 00:13:26.987 ************************************ 00:13:26.987 09:41:54 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:13:26.987 09:41:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:13:26.987 09:41:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:26.987 09:41:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:13:26.987 09:41:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:26.987 09:41:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=trim 00:13:26.987 09:41:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type= 00:13:26.987 09:41:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:13:26.987 09:41:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local fio_dir=/usr/src/fio 00:13:26.987 09:41:54 blockdev_xnvme.bdev_fio -- 
common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:26.987 09:41:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z trim ']' 00:13:26.987 09:41:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:13:26.987 09:41:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:26.987 09:41:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:13:26.987 09:41:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1315 -- # '[' trim == verify ']' 00:13:26.987 09:41:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1330 -- # '[' trim == trim ']' 00:13:26.987 09:41:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1331 -- # echo rw=trimwrite 00:13:26.987 09:41:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:26.988 09:41:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "f79cbb70-0fce-49df-a4ea-22d63b2e66a4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "f79cbb70-0fce-49df-a4ea-22d63b2e66a4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "dfcb5b88-6e1d-4ca6-95da-67f2bac1d788"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "dfcb5b88-6e1d-4ca6-95da-67f2bac1d788",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "04d87224-460d-4472-bce9-9aaef2b2b591"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "04d87224-460d-4472-bce9-9aaef2b2b591",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' 
"zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "76e3f67e-e88a-40a5-8b02-e6c3283960be"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "76e3f67e-e88a-40a5-8b02-e6c3283960be",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "784d49d1-3767-48fb-8175-7b6afd8d3651"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "784d49d1-3767-48fb-8175-7b6afd8d3651",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "46ee10c8-104a-4cad-900d-c01216fa2a19"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "46ee10c8-104a-4cad-900d-c01216fa2a19",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:26.988 09:41:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:13:26.988 09:41:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:26.988 /home/vagrant/spdk_repo/spdk 00:13:26.988 09:41:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:13:26.988 09:41:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:13:26.988 09:41:54 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:13:26.988 00:13:26.988 real 0m12.118s 00:13:26.988 user 
0m28.715s 00:13:26.988 sys 0m19.637s 00:13:26.988 ************************************ 00:13:26.988 END TEST bdev_fio 00:13:26.988 ************************************ 00:13:26.988 09:41:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:26.988 09:41:54 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:26.988 09:41:54 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:26.988 09:41:54 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:26.988 09:41:54 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:13:26.988 09:41:54 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:26.988 09:41:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:26.988 ************************************ 00:13:26.988 START TEST bdev_verify 00:13:26.988 ************************************ 00:13:26.988 09:41:54 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:26.988 [2024-11-07 09:41:54.388964] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:13:26.988 [2024-11-07 09:41:54.389115] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70276 ] 00:13:26.988 [2024-11-07 09:41:54.554076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:27.250 [2024-11-07 09:41:54.700689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.250 [2024-11-07 09:41:54.700714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:27.824 Running I/O for 5 seconds... 
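The verify pass now starting is a single bdevperf invocation against the same generated bdev.json. The parameters below are copied from the command line in the trace; the per-flag glosses are bdevperf semantics as understood here, not output of the run:

    # -q 128    : keep 128 I/Os in flight per job
    # -o 4096   : 4 KiB I/O size (the big-I/O pass later repeats this with -o 65536)
    # -w verify : write a pattern, read it back, and check the payload
    # -t 5      : run for 5 seconds
    # -C        : let every core submit I/O to every bdev
    # -m 0x3    : reactor core mask, cores 0 and 1
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3

With -m 0x3 each bdev is exercised from two reactors at once, which is why the latency table that follows reports every bdev twice, once per core mask 0x1 and 0x2.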
00:13:29.716 22208.00 IOPS, 86.75 MiB/s [2024-11-07T09:41:58.776Z] 22826.50 IOPS, 89.17 MiB/s [2024-11-07T09:41:59.721Z] 22812.33 IOPS, 89.11 MiB/s [2024-11-07T09:42:00.316Z] 23213.25 IOPS, 90.68 MiB/s [2024-11-07T09:42:00.316Z] 23340.80 IOPS, 91.17 MiB/s 00:13:32.645 Latency(us) 00:13:32.645 [2024-11-07T09:42:00.316Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:32.645 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:32.645 Verification LBA range: start 0x0 length 0xa0000 00:13:32.645 nvme0n1 : 5.04 1855.69 7.25 0.00 0.00 68855.00 5696.59 67350.84 00:13:32.645 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:32.645 Verification LBA range: start 0xa0000 length 0xa0000 00:13:32.645 nvme0n1 : 5.05 1597.20 6.24 0.00 0.00 79975.53 5898.24 104857.60 00:13:32.645 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:32.645 Verification LBA range: start 0x0 length 0xbd0bd 00:13:32.645 nvme1n1 : 5.04 2404.26 9.39 0.00 0.00 52979.39 5948.65 57671.68 00:13:32.645 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:32.645 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:13:32.645 nvme1n1 : 5.07 2438.65 9.53 0.00 0.00 52169.35 6175.51 70577.23 00:13:32.645 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:32.645 Verification LBA range: start 0x0 length 0x80000 00:13:32.645 nvme2n1 : 5.04 1982.05 7.74 0.00 0.00 64240.45 7309.78 72997.02 00:13:32.645 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:32.645 Verification LBA range: start 0x80000 length 0x80000 00:13:32.645 nvme2n1 : 5.07 1818.94 7.11 0.00 0.00 69788.03 7461.02 66544.25 00:13:32.645 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:32.645 Verification LBA range: start 0x0 length 0x80000 00:13:32.645 nvme2n2 : 5.04 1930.47 7.54 0.00 0.00 65868.61 7309.78 61704.66 00:13:32.645 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:32.645 Verification LBA range: start 0x80000 length 0x80000 00:13:32.645 nvme2n2 : 5.07 1793.11 7.00 0.00 0.00 70648.95 8620.50 66140.95 00:13:32.645 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:32.645 Verification LBA range: start 0x0 length 0x80000 00:13:32.645 nvme2n3 : 5.04 1929.88 7.54 0.00 0.00 65814.94 7864.32 64931.05 00:13:32.645 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:32.645 Verification LBA range: start 0x80000 length 0x80000 00:13:32.645 nvme2n3 : 5.08 1789.86 6.99 0.00 0.00 70725.80 5923.45 67754.14 00:13:32.645 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:32.645 Verification LBA range: start 0x0 length 0x20000 00:13:32.645 nvme3n1 : 5.05 1926.09 7.52 0.00 0.00 65829.31 5721.80 70577.23 00:13:32.645 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:32.645 Verification LBA range: start 0x20000 length 0x20000 00:13:32.645 nvme3n1 : 5.07 1791.75 7.00 0.00 0.00 70449.22 5822.62 70173.93 00:13:32.645 [2024-11-07T09:42:00.316Z] =================================================================================================================== 00:13:32.645 [2024-11-07T09:42:00.316Z] Total : 23257.95 90.85 0.00 0.00 65564.92 5696.59 104857.60 00:13:33.589 00:13:33.589 real 0m6.870s 00:13:33.589 user 0m11.049s 00:13:33.589 sys 0m1.473s 00:13:33.589 09:42:01 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:13:33.589 09:42:01 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:13:33.589 ************************************ 00:13:33.589 END TEST bdev_verify 00:13:33.589 ************************************ 00:13:33.589 09:42:01 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:33.589 09:42:01 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:13:33.589 09:42:01 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:33.589 09:42:01 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:33.589 ************************************ 00:13:33.589 START TEST bdev_verify_big_io 00:13:33.589 ************************************ 00:13:33.589 09:42:01 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:33.850 [2024-11-07 09:42:01.341379] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:13:33.850 [2024-11-07 09:42:01.341547] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70375 ] 00:13:33.850 [2024-11-07 09:42:01.510917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:34.111 [2024-11-07 09:42:01.660243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:34.111 [2024-11-07 09:42:01.660341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.683 Running I/O for 5 seconds... 
00:13:40.803 1500.00 IOPS, 93.75 MiB/s [2024-11-07T09:42:08.735Z] 2922.00 IOPS, 182.62 MiB/s [2024-11-07T09:42:08.735Z] 3340.33 IOPS, 208.77 MiB/s
00:13:41.064 Latency(us)
00:13:41.064 [2024-11-07T09:42:08.735Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:41.064 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:41.064 Verification LBA range: start 0x0 length 0xa000
00:13:41.064 nvme0n1 : 5.60 145.71 9.11 0.00 0.00 850467.49 158093.00 871124.68
00:13:41.064 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:41.064 Verification LBA range: start 0xa000 length 0xa000
00:13:41.064 nvme0n1 : 6.02 95.65 5.98 0.00 0.00 1283624.78 136314.88 1703532.70
00:13:41.064 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:41.064 Verification LBA range: start 0x0 length 0xbd0b
00:13:41.064 nvme1n1 : 5.60 136.91 8.56 0.00 0.00 878128.18 20467.40 1884210.41
00:13:41.064 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:41.064 Verification LBA range: start 0xbd0b length 0xbd0b
00:13:41.064 nvme1n1 : 5.95 107.58 6.72 0.00 0.00 1071250.67 9124.63 1284102.30
00:13:41.064 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:41.064 Verification LBA range: start 0x0 length 0x8000
00:13:41.064 nvme2n1 : 5.71 168.01 10.50 0.00 0.00 699215.14 68964.04 622692.82
00:13:41.064 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:41.064 Verification LBA range: start 0x8000 length 0x8000
00:13:41.064 nvme2n1 : 6.02 77.02 4.81 0.00 0.00 1420209.01 62107.96 1464780.01
00:13:41.064 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:41.064 Verification LBA range: start 0x0 length 0x8000
00:13:41.064 nvme2n2 : 5.72 142.76 8.92 0.00 0.00 799546.61 147607.24 1503496.66
00:13:41.064 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:41.064 Verification LBA range: start 0x8000 length 0x8000
00:13:41.064 nvme2n2 : 6.14 99.04 6.19 0.00 0.00 1061160.67 51218.90 2361715.79
00:13:41.064 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:41.064 Verification LBA range: start 0x0 length 0x8000
00:13:41.064 nvme2n3 : 5.79 187.56 11.72 0.00 0.00 603484.48 4738.76 1258291.20
00:13:41.064 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:41.064 Verification LBA range: start 0x8000 length 0x8000
00:13:41.064 nvme2n3 : 6.23 159.19 9.95 0.00 0.00 635435.20 3680.10 1245385.65
00:13:41.064 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:41.064 Verification LBA range: start 0x0 length 0x2000
00:13:41.064 nvme3n1 : 5.80 138.05 8.63 0.00 0.00 798734.26 2797.88 2606921.26
00:13:41.064 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:41.064 Verification LBA range: start 0x2000 length 0x2000
00:13:41.064 nvme3n1 : 6.43 232.59 14.54 0.00 0.00 415322.60 1027.15 2942465.58
00:13:41.064 [2024-11-07T09:42:08.735Z] ===================================================================================================================
00:13:41.064 [2024-11-07T09:42:08.735Z] Total : 1690.04 105.63 0.00 0.00 794966.49 1027.15 2942465.58
00:13:42.006
00:13:42.006 real 0m8.182s
00:13:42.006 user 0m14.916s
00:13:42.006 sys 0m0.543s
00:13:42.006 ************************************
00:13:42.006 END TEST bdev_verify_big_io
00:13:42.006 ************************************
00:13:42.006 09:42:09 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable
00:13:42.006 09:42:09 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:13:42.006 09:42:09 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:13:42.006 09:42:09 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']'
00:13:42.006 09:42:09 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable
00:13:42.006 09:42:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:13:42.006 ************************************
00:13:42.006 START TEST bdev_write_zeroes
00:13:42.006 ************************************
00:13:42.006 09:42:09 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:13:42.266 [2024-11-07 09:42:09.563991] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization...
[2024-11-07 09:42:09.564102] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70490 ]
[2024-11-07 09:42:09.721124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:42.266 [2024-11-07 09:42:09.815900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:42.526 Running I/O for 1 seconds...
00:13:43.913 71616.00 IOPS, 279.75 MiB/s
00:13:43.913 Latency(us)
00:13:43.913 [2024-11-07T09:42:11.584Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:43.913 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:43.913 nvme0n1 : 1.02 11449.38 44.72 0.00 0.00 11169.79 6326.74 22181.42
00:13:43.913 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:43.913 nvme1n1 : 1.03 13870.56 54.18 0.00 0.00 9212.53 5192.47 20971.52
00:13:43.913 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:43.913 nvme2n1 : 1.03 11429.51 44.65 0.00 0.00 11109.97 5822.62 19559.98
00:13:43.913 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:43.913 nvme2n2 : 1.03 11363.27 44.39 0.00 0.00 11167.90 6856.07 23693.78
00:13:43.914 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:43.914 nvme2n3 : 1.03 11349.94 44.34 0.00 0.00 11175.58 6906.49 23996.26
00:13:43.914 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:43.914 nvme3n1 : 1.03 11335.90 44.28 0.00 0.00 11182.31 7007.31 24298.73
00:13:43.914 [2024-11-07T09:42:11.585Z] ===================================================================================================================
00:13:43.914 [2024-11-07T09:42:11.585Z] Total : 70798.56 276.56 0.00 0.00 10778.17 5192.47 24298.73
00:13:44.486
00:13:44.486 real 0m2.481s
00:13:44.486 user 0m1.847s
00:13:44.486 sys 0m0.474s
00:13:44.486 ************************************
00:13:44.486 END TEST bdev_write_zeroes
00:13:44.486 ************************************
00:13:44.486 09:42:11 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable
00:13:44.486 09:42:11 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:13:44.486 09:42:12 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:13:44.486 09:42:12 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']'
00:13:44.486 09:42:12 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable
00:13:44.486 09:42:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:13:44.486 ************************************
00:13:44.486 START TEST bdev_json_nonenclosed
00:13:44.486 ************************************
00:13:44.486 09:42:12 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:13:44.748 [2024-11-07 09:42:12.121805] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization...
[2024-11-07 09:42:12.121952] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70540 ]
[2024-11-07 09:42:12.288967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:44.748 [2024-11-07 09:42:12.407148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:44.748 [2024-11-07 09:42:12.407242] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:13:44.748 [2024-11-07 09:42:12.407261] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:13:44.748 [2024-11-07 09:42:12.407273] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:13:45.009
00:13:45.009 real 0m0.549s
00:13:45.009 user 0m0.327s
00:13:45.009 sys 0m0.115s
00:13:45.009 ************************************
00:13:45.009 09:42:12 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable
00:13:45.010 09:42:12 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:13:45.010 END TEST bdev_json_nonenclosed
00:13:45.010 ************************************
00:13:45.010 09:42:12 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:13:45.010 09:42:12 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']'
00:13:45.010 09:42:12 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable
00:13:45.010 09:42:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:13:45.010 ************************************
00:13:45.010 START TEST bdev_json_nonarray
00:13:45.010 ************************************
00:13:45.010 09:42:12 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:13:45.272 [2024-11-07 09:42:12.731555] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization...
[2024-11-07 09:42:12.731723] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70564 ]
[2024-11-07 09:42:12.887683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:45.533 [2024-11-07 09:42:13.008619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:45.533 [2024-11-07 09:42:13.008747] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:13:45.534 [2024-11-07 09:42:13.008768] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:13:45.534 [2024-11-07 09:42:13.008779] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:13:45.796
00:13:45.796 real 0m0.545s
00:13:45.796 user 0m0.326s
00:13:45.796 sys 0m0.113s
00:13:45.796 09:42:13 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable
00:13:45.796 ************************************
00:13:45.796 END TEST bdev_json_nonarray
00:13:45.796 ************************************
00:13:45.796 09:42:13 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:13:45.796 09:42:13 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]]
00:13:45.796 09:42:13 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]]
00:13:45.796 09:42:13 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]]
00:13:45.796 09:42:13 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT
00:13:45.796 09:42:13 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup
00:13:45.796 09:42:13 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:13:45.796 09:42:13 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:13:45.796 09:42:13 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]]
00:13:45.796 09:42:13 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]]
00:13:45.796 09:42:13 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]]
00:13:45.796 09:42:13 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]]
00:13:45.796 09:42:13 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:13:46.370 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:13:46.942 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:13:47.565 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:13:47.565 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:13:47.825 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:13:47.825
00:13:47.825 real 0m55.891s
00:13:47.825 user 1m27.403s
00:13:47.825 sys 0m31.409s
00:13:47.825 09:42:15 blockdev_xnvme -- common/autotest_common.sh@1128 -- # xtrace_disable
00:13:47.825 09:42:15 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:13:47.825 ************************************
00:13:47.825 END TEST blockdev_xnvme
00:13:47.825 ************************************
00:13:47.825 09:42:15 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh
00:13:47.825 09:42:15 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:13:47.825 09:42:15 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:13:47.825 09:42:15 -- common/autotest_common.sh@10 -- # set +x
00:13:47.825 ************************************
00:13:47.825 START TEST ublk
00:13:47.825 ************************************
00:13:47.825 09:42:15 ublk -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh
00:13:47.825 * Looking for test storage...
00:13:47.825 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk
00:13:47.825 09:42:15 ublk -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:13:47.825 09:42:15 ublk -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:13:47.825 09:42:15 ublk -- common/autotest_common.sh@1691 -- # lcov --version
00:13:48.085 09:42:15 ublk -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:13:48.085 09:42:15 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:13:48.085 09:42:15 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l
00:13:48.085 09:42:15 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l
00:13:48.085 09:42:15 ublk -- scripts/common.sh@336 -- # IFS=.-:
00:13:48.085 09:42:15 ublk -- scripts/common.sh@336 -- # read -ra ver1
00:13:48.085 09:42:15 ublk -- scripts/common.sh@337 -- # IFS=.-:
00:13:48.085 09:42:15 ublk -- scripts/common.sh@337 -- # read -ra ver2
00:13:48.085 09:42:15 ublk -- scripts/common.sh@338 -- # local 'op=<'
00:13:48.085 09:42:15 ublk -- scripts/common.sh@340 -- # ver1_l=2
00:13:48.085 09:42:15 ublk -- scripts/common.sh@341 -- # ver2_l=1
00:13:48.085 09:42:15 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:13:48.085 09:42:15 ublk -- scripts/common.sh@344 -- # case "$op" in
00:13:48.085 09:42:15 ublk -- scripts/common.sh@345 -- # : 1
00:13:48.085 09:42:15 ublk -- scripts/common.sh@364 -- # (( v = 0 ))
00:13:48.085 09:42:15 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:13:48.085 09:42:15 ublk -- scripts/common.sh@365 -- # decimal 1
00:13:48.085 09:42:15 ublk -- scripts/common.sh@353 -- # local d=1
00:13:48.085 09:42:15 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:48.085 09:42:15 ublk -- scripts/common.sh@355 -- # echo 1
00:13:48.085 09:42:15 ublk -- scripts/common.sh@365 -- # ver1[v]=1
00:13:48.085 09:42:15 ublk -- scripts/common.sh@366 -- # decimal 2
00:13:48.085 09:42:15 ublk -- scripts/common.sh@353 -- # local d=2
00:13:48.086 09:42:15 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:13:48.086 09:42:15 ublk -- scripts/common.sh@355 -- # echo 2
00:13:48.086 09:42:15 ublk -- scripts/common.sh@366 -- # ver2[v]=2
00:13:48.086 09:42:15 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:13:48.086 09:42:15 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:13:48.086 09:42:15 ublk -- scripts/common.sh@368 -- # return 0
00:13:48.086 09:42:15 ublk -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:13:48.086 09:42:15 ublk -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:13:48.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:48.086 --rc genhtml_branch_coverage=1
00:13:48.086 --rc genhtml_function_coverage=1
00:13:48.086 --rc genhtml_legend=1
00:13:48.086 --rc geninfo_all_blocks=1
00:13:48.086 --rc geninfo_unexecuted_blocks=1
00:13:48.086
00:13:48.086 '
00:13:48.086 09:42:15 ublk -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:13:48.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:48.086 --rc genhtml_branch_coverage=1
00:13:48.086 --rc genhtml_function_coverage=1
00:13:48.086 --rc genhtml_legend=1
00:13:48.086 --rc geninfo_all_blocks=1
00:13:48.086 --rc geninfo_unexecuted_blocks=1
00:13:48.086
00:13:48.086 '
00:13:48.086 09:42:15 ublk -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:13:48.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:48.086 --rc genhtml_branch_coverage=1
00:13:48.086 --rc genhtml_function_coverage=1
00:13:48.086 --rc genhtml_legend=1
00:13:48.086 --rc geninfo_all_blocks=1
00:13:48.086 --rc geninfo_unexecuted_blocks=1
00:13:48.086
00:13:48.086 '
00:13:48.086 09:42:15 ublk -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:13:48.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:48.086 --rc genhtml_branch_coverage=1
00:13:48.086 --rc genhtml_function_coverage=1
00:13:48.086 --rc genhtml_legend=1
00:13:48.086 --rc geninfo_all_blocks=1
00:13:48.086 --rc geninfo_unexecuted_blocks=1
00:13:48.086
00:13:48.086 '
00:13:48.086 09:42:15 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh
00:13:48.086 09:42:15 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128
00:13:48.086 09:42:15 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512
00:13:48.086 09:42:15 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400
00:13:48.086 09:42:15 ublk -- lvol/common.sh@9 -- # AIO_BS=4096
00:13:48.086 09:42:15 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4
00:13:48.086 09:42:15 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304
00:13:48.086 09:42:15 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124
00:13:48.086 09:42:15 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424
00:13:48.086 09:42:15 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]]
00:13:48.086 09:42:15 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4
00:13:48.086 09:42:15 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4
00:13:48.086 09:42:15 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512
00:13:48.086 09:42:15 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128
00:13:48.086 09:42:15 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1
00:13:48.086 09:42:15 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096
00:13:48.086 09:42:15 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728
00:13:48.086 09:42:15 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3
00:13:48.086 09:42:15 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv
00:13:48.086 09:42:15 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config
00:13:48.086 09:42:15 ublk -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:13:48.086 09:42:15 ublk -- common/autotest_common.sh@1109 -- # xtrace_disable
00:13:48.086 09:42:15 ublk -- common/autotest_common.sh@10 -- # set +x
00:13:48.086 ************************************
00:13:48.086 START TEST test_save_ublk_config
00:13:48.086 ************************************
00:13:48.086 09:42:15 ublk.test_save_ublk_config -- common/autotest_common.sh@1127 -- # test_save_config
00:13:48.086 09:42:15 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config
00:13:48.086 09:42:15 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=70851
00:13:48.086 09:42:15 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk
00:13:48.086 09:42:15 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT
00:13:48.086 09:42:15 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 70851
00:13:48.086 09:42:15 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # '[' -z 70851 ']'
00:13:48.086 09:42:15 ublk.test_save_ublk_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:48.086 09:42:15 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # local max_retries=100
00:13:48.086 09:42:15 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:48.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:48.086 09:42:15 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # xtrace_disable
00:13:48.086 09:42:15 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x
00:13:48.346 [2024-11-07 09:42:15.653707] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization...
[2024-11-07 09:42:15.653856] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70851 ]
[2024-11-07 09:42:15.815086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:48.346 [2024-11-07 09:42:15.932605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:49.288 09:42:16 ublk.test_save_ublk_config -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:13:49.288 09:42:16 ublk.test_save_ublk_config -- common/autotest_common.sh@866 -- # return 0
00:13:49.288 09:42:16 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0
00:13:49.288 09:42:16 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd
00:13:49.288 09:42:16 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:49.288 09:42:16 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x
00:13:49.288 [2024-11-07 09:42:16.658655] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled
00:13:49.288 [2024-11-07 09:42:16.659579] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully
00:13:49.288 malloc0
00:13:49.288 [2024-11-07 09:42:16.730795] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128
00:13:49.288 [2024-11-07 09:42:16.730892] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0
00:13:49.288 [2024-11-07 09:42:16.730903] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq
00:13:49.288 [2024-11-07 09:42:16.730911] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV
00:13:49.288 [2024-11-07 09:42:16.739766] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed
00:13:49.288 [2024-11-07 09:42:16.739799] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS
00:13:49.288 [2024-11-07 09:42:16.746671] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:13:49.288 [2024-11-07 09:42:16.746791] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV
00:13:49.288 [2024-11-07 09:42:16.763664] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed
00:13:49.288 0
00:13:49.288 09:42:16 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:49.288 09:42:16 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config
00:13:49.288 09:42:16 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:49.288 09:42:16 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x
00:13:49.549 09:42:17 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:49.549 09:42:17 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{
00:13:49.549 "subsystems": [
00:13:49.549 {
00:13:49.549 "subsystem": "fsdev",
00:13:49.549 "config": [
00:13:49.549 {
00:13:49.549 "method": "fsdev_set_opts",
00:13:49.549 "params": {
00:13:49.549 "fsdev_io_pool_size": 65535,
00:13:49.549 "fsdev_io_cache_size": 256
00:13:49.549 }
00:13:49.549 }
00:13:49.549 ]
00:13:49.549 },
00:13:49.549 {
00:13:49.549 "subsystem": "keyring",
00:13:49.549 "config": []
00:13:49.549 },
00:13:49.549 {
00:13:49.549 "subsystem": "iobuf",
00:13:49.549 "config": [
00:13:49.549 {
00:13:49.549 "method": "iobuf_set_options",
00:13:49.549 "params": {
00:13:49.549 "small_pool_count": 8192,
00:13:49.549 "large_pool_count": 1024,
00:13:49.549 "small_bufsize": 8192,
00:13:49.549 "large_bufsize": 135168,
00:13:49.549 "enable_numa": false
00:13:49.549 }
00:13:49.549 }
00:13:49.549 ]
00:13:49.549 },
00:13:49.549 {
00:13:49.549 "subsystem": "sock",
00:13:49.549 "config": [
00:13:49.549 {
00:13:49.549 "method": "sock_set_default_impl",
00:13:49.549 "params": {
00:13:49.549 "impl_name": "posix"
00:13:49.549 }
00:13:49.549 },
00:13:49.549 {
00:13:49.549 "method": "sock_impl_set_options",
00:13:49.549 "params": {
00:13:49.549 "impl_name": "ssl",
00:13:49.549 "recv_buf_size": 4096,
00:13:49.549 "send_buf_size": 4096,
00:13:49.549 "enable_recv_pipe": true,
00:13:49.549 "enable_quickack": false,
00:13:49.549 "enable_placement_id": 0,
00:13:49.549 "enable_zerocopy_send_server": true,
00:13:49.549 "enable_zerocopy_send_client": false,
00:13:49.549 "zerocopy_threshold": 0,
00:13:49.549 "tls_version": 0,
00:13:49.549 "enable_ktls": false
00:13:49.549 }
00:13:49.549 },
00:13:49.549 {
00:13:49.549 "method": "sock_impl_set_options",
00:13:49.549 "params": {
00:13:49.549 "impl_name": "posix",
00:13:49.549 "recv_buf_size": 2097152,
00:13:49.549 "send_buf_size": 2097152,
00:13:49.549 "enable_recv_pipe": true,
00:13:49.549 "enable_quickack": false,
00:13:49.549 "enable_placement_id": 0,
00:13:49.549 "enable_zerocopy_send_server": true,
00:13:49.549 "enable_zerocopy_send_client": false,
00:13:49.549 "zerocopy_threshold": 0,
00:13:49.549 "tls_version": 0,
00:13:49.549 "enable_ktls": false
00:13:49.549 }
00:13:49.549 }
00:13:49.549 ]
00:13:49.549 },
00:13:49.549 {
00:13:49.549 "subsystem": "vmd",
00:13:49.549 "config": []
00:13:49.549 },
00:13:49.549 {
00:13:49.549 "subsystem": "accel",
00:13:49.549 "config": [
00:13:49.549 {
00:13:49.549 "method": "accel_set_options",
00:13:49.549 "params": {
00:13:49.549 "small_cache_size": 128,
00:13:49.549 "large_cache_size": 16,
00:13:49.549 "task_count": 2048,
00:13:49.549 "sequence_count": 2048,
00:13:49.549 "buf_count": 2048
00:13:49.549 }
00:13:49.549 }
00:13:49.549 ]
00:13:49.549 },
00:13:49.549 {
00:13:49.549 "subsystem": "bdev",
00:13:49.549 "config": [
00:13:49.549 {
00:13:49.549 "method": "bdev_set_options",
00:13:49.549 "params": {
00:13:49.549 "bdev_io_pool_size": 65535,
00:13:49.549 "bdev_io_cache_size": 256,
00:13:49.549 "bdev_auto_examine": true,
00:13:49.549 "iobuf_small_cache_size": 128,
00:13:49.549 "iobuf_large_cache_size": 16
00:13:49.549 }
00:13:49.549 },
00:13:49.549 {
00:13:49.549 "method": "bdev_raid_set_options",
00:13:49.549 "params": {
00:13:49.550 "process_window_size_kb": 1024,
00:13:49.550 "process_max_bandwidth_mb_sec": 0
00:13:49.550 }
00:13:49.550 },
00:13:49.550 {
00:13:49.550 "method": "bdev_iscsi_set_options",
00:13:49.550 "params": {
00:13:49.550 "timeout_sec": 30
00:13:49.550 }
00:13:49.550 },
00:13:49.550 {
00:13:49.550 "method": "bdev_nvme_set_options",
00:13:49.550 "params": {
00:13:49.550 "action_on_timeout": "none",
00:13:49.550 "timeout_us": 0,
00:13:49.550 "timeout_admin_us": 0,
00:13:49.550 "keep_alive_timeout_ms": 10000,
00:13:49.550 "arbitration_burst": 0,
00:13:49.550 "low_priority_weight": 0,
00:13:49.550 "medium_priority_weight": 0,
00:13:49.550 "high_priority_weight": 0,
00:13:49.550 "nvme_adminq_poll_period_us": 10000,
00:13:49.550 "nvme_ioq_poll_period_us": 0,
00:13:49.550 "io_queue_requests": 0,
00:13:49.550 "delay_cmd_submit": true,
00:13:49.550 "transport_retry_count": 4,
00:13:49.550 "bdev_retry_count": 3,
00:13:49.550 "transport_ack_timeout": 0,
00:13:49.550 "ctrlr_loss_timeout_sec": 0,
00:13:49.550 "reconnect_delay_sec": 0,
00:13:49.550 "fast_io_fail_timeout_sec": 0,
00:13:49.550 "disable_auto_failback": false,
00:13:49.550 "generate_uuids": false,
00:13:49.550 "transport_tos": 0,
00:13:49.550 "nvme_error_stat": false,
00:13:49.550 "rdma_srq_size": 0,
00:13:49.550 "io_path_stat": false,
00:13:49.550 "allow_accel_sequence": false,
00:13:49.550 "rdma_max_cq_size": 0,
00:13:49.550 "rdma_cm_event_timeout_ms": 0,
00:13:49.550 "dhchap_digests": [
00:13:49.550 "sha256",
00:13:49.550 "sha384",
00:13:49.550 "sha512"
00:13:49.550 ],
00:13:49.550 "dhchap_dhgroups": [
00:13:49.550 "null",
00:13:49.550 "ffdhe2048",
00:13:49.550 "ffdhe3072",
00:13:49.550 "ffdhe4096",
00:13:49.550 "ffdhe6144",
00:13:49.550 "ffdhe8192"
00:13:49.550 ]
00:13:49.550 }
00:13:49.550 },
00:13:49.550 {
00:13:49.550 "method": "bdev_nvme_set_hotplug",
00:13:49.550 "params": {
00:13:49.550 "period_us": 100000,
00:13:49.550 "enable": false
00:13:49.550 }
00:13:49.550 },
00:13:49.550 {
00:13:49.550 "method": "bdev_malloc_create",
00:13:49.550 "params": {
00:13:49.550 "name": "malloc0",
00:13:49.550 "num_blocks": 8192,
00:13:49.550 "block_size": 4096,
00:13:49.550 "physical_block_size": 4096,
00:13:49.550 "uuid": "374a5ec4-c7ba-41d3-b496-670e65b5ed2f",
00:13:49.550 "optimal_io_boundary": 0,
00:13:49.550 "md_size": 0,
00:13:49.550 "dif_type": 0,
00:13:49.550 "dif_is_head_of_md": false,
00:13:49.550 "dif_pi_format": 0
00:13:49.550 }
00:13:49.550 },
00:13:49.550 {
00:13:49.550 "method": "bdev_wait_for_examine"
00:13:49.550 }
00:13:49.550 ]
00:13:49.550 },
00:13:49.550 {
00:13:49.550 "subsystem": "scsi",
00:13:49.550 "config": null
00:13:49.550 },
00:13:49.550 {
00:13:49.550 "subsystem": "scheduler",
00:13:49.550 "config": [
00:13:49.550 {
00:13:49.550 "method": "framework_set_scheduler",
00:13:49.550 "params": {
00:13:49.550 "name": "static"
00:13:49.550 }
00:13:49.550 }
00:13:49.550 ]
00:13:49.550 },
00:13:49.550 {
00:13:49.550 "subsystem": "vhost_scsi",
00:13:49.550 "config": []
00:13:49.550 },
00:13:49.550 {
00:13:49.550 "subsystem": "vhost_blk",
00:13:49.550 "config": []
00:13:49.550 },
00:13:49.550 {
00:13:49.550 "subsystem": "ublk",
00:13:49.550 "config": [
00:13:49.550 {
00:13:49.550 "method": "ublk_create_target",
00:13:49.550 "params": {
00:13:49.550 "cpumask": "1"
00:13:49.550 }
00:13:49.550 },
00:13:49.550 {
00:13:49.550 "method": "ublk_start_disk",
00:13:49.550 "params": {
00:13:49.550 "bdev_name": "malloc0",
00:13:49.550 "ublk_id": 0,
00:13:49.550 "num_queues": 1,
00:13:49.550 "queue_depth": 128
00:13:49.550 }
00:13:49.550 }
00:13:49.550 ]
00:13:49.550 },
00:13:49.550 {
00:13:49.550 "subsystem": "nbd",
00:13:49.550 "config": []
00:13:49.550 },
00:13:49.550 {
00:13:49.550 "subsystem": "nvmf",
00:13:49.550 "config": [
00:13:49.550 {
00:13:49.550 "method": "nvmf_set_config",
00:13:49.550 "params": {
00:13:49.550 "discovery_filter": "match_any",
00:13:49.550 "admin_cmd_passthru": {
00:13:49.550 "identify_ctrlr": false
00:13:49.550 },
00:13:49.550 "dhchap_digests": [
00:13:49.550 "sha256",
00:13:49.550 "sha384",
00:13:49.550 "sha512"
00:13:49.550 ],
00:13:49.550 "dhchap_dhgroups": [
00:13:49.550 "null",
00:13:49.550 "ffdhe2048",
00:13:49.550 "ffdhe3072",
00:13:49.550 "ffdhe4096",
00:13:49.550 "ffdhe6144",
00:13:49.550 "ffdhe8192"
00:13:49.550 ]
00:13:49.550 }
00:13:49.550 },
00:13:49.550 {
00:13:49.550 "method": "nvmf_set_max_subsystems",
00:13:49.550 "params": {
00:13:49.550 "max_subsystems": 1024
00:13:49.550 }
00:13:49.550 },
00:13:49.550 {
00:13:49.550 "method": "nvmf_set_crdt",
00:13:49.550 "params": {
00:13:49.550 "crdt1": 0,
00:13:49.550 "crdt2": 0,
00:13:49.550 "crdt3": 0
00:13:49.550 }
00:13:49.550 }
00:13:49.550 ]
00:13:49.550 },
00:13:49.550 {
00:13:49.550 "subsystem": "iscsi",
00:13:49.550 "config": [
00:13:49.550 {
00:13:49.550 "method": "iscsi_set_options",
00:13:49.550 "params": {
00:13:49.550 "node_base": "iqn.2016-06.io.spdk",
00:13:49.550 "max_sessions": 128,
00:13:49.550 "max_connections_per_session": 2,
00:13:49.550 "max_queue_depth": 64,
00:13:49.550 "default_time2wait": 2,
00:13:49.550 "default_time2retain": 20,
00:13:49.550 "first_burst_length": 8192,
00:13:49.550 "immediate_data": true,
00:13:49.550 "allow_duplicated_isid": false,
00:13:49.550 "error_recovery_level": 0,
00:13:49.550 "nop_timeout": 60,
00:13:49.550 "nop_in_interval": 30,
00:13:49.550 "disable_chap": false,
00:13:49.550 "require_chap": false,
00:13:49.550 "mutual_chap": false,
00:13:49.550 "chap_group": 0,
00:13:49.550 "max_large_datain_per_connection": 64,
00:13:49.550 "max_r2t_per_connection": 4,
00:13:49.550 "pdu_pool_size": 36864,
00:13:49.550 "immediate_data_pool_size": 16384,
00:13:49.550 "data_out_pool_size": 2048
00:13:49.550 }
00:13:49.550 }
00:13:49.550 ]
00:13:49.550 }
00:13:49.550 ]
00:13:49.550 }'
00:13:49.550 09:42:17 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 70851
00:13:49.550 09:42:17 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # '[' -z 70851 ']'
00:13:49.550 09:42:17 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # kill -0 70851
00:13:49.550 09:42:17 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # uname
00:13:49.550 09:42:17 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:13:49.550 09:42:17 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70851
00:13:49.550 killing process with pid 70851
00:13:49.550 09:42:17 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:13:49.550 09:42:17 ublk.test_save_ublk_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:13:49.550 09:42:17 ublk.test_save_ublk_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70851'
00:13:49.550 09:42:17 ublk.test_save_ublk_config -- common/autotest_common.sh@971 -- # kill 70851
00:13:49.550 09:42:17 ublk.test_save_ublk_config -- common/autotest_common.sh@976 -- # wait 70851
00:13:50.934 [2024-11-07 09:42:18.174082] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV
00:13:50.934 [2024-11-07 09:42:18.202781] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed
00:13:50.934 [2024-11-07 09:42:18.202927] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV
00:13:50.934 [2024-11-07 09:42:18.210677] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed
00:13:50.934 [2024-11-07 09:42:18.210738] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq
00:13:50.934 [2024-11-07 09:42:18.210752] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped
00:13:50.934 [2024-11-07 09:42:18.210783] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
00:13:50.934 [2024-11-07 09:42:18.210941] ublk.c: 766:_ublk_fini_done: *DEBUG*:
00:13:52.313 09:42:19 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=70906
00:13:52.313 09:42:19 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 70906
00:13:52.313 09:42:19 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # '[' -z 70906 ']'
00:13:52.313 09:42:19 ublk.test_save_ublk_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:52.313 09:42:19 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # local max_retries=100
00:13:52.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:52.313 09:42:19 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:52.313 09:42:19 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # xtrace_disable
00:13:52.313 09:42:19 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x
00:13:52.313 09:42:19 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63
00:13:52.313 09:42:19 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{
00:13:52.313 "subsystems": [
00:13:52.313 {
00:13:52.313 "subsystem": "fsdev",
00:13:52.313 "config": [
00:13:52.313 {
00:13:52.313 "method": "fsdev_set_opts",
00:13:52.313 "params": {
00:13:52.313 "fsdev_io_pool_size": 65535,
00:13:52.313 "fsdev_io_cache_size": 256
00:13:52.313 }
00:13:52.313 }
00:13:52.313 ]
00:13:52.313 },
00:13:52.313 {
00:13:52.313 "subsystem": "keyring",
00:13:52.313 "config": []
00:13:52.313 },
00:13:52.313 {
00:13:52.313 "subsystem": "iobuf",
00:13:52.313 "config": [
00:13:52.313 {
00:13:52.313 "method": "iobuf_set_options",
00:13:52.313 "params": {
00:13:52.313 "small_pool_count": 8192,
00:13:52.313 "large_pool_count": 1024,
00:13:52.313 "small_bufsize": 8192,
00:13:52.313 "large_bufsize": 135168,
00:13:52.313 "enable_numa": false
00:13:52.313 }
00:13:52.313 }
00:13:52.313 ]
00:13:52.313 },
00:13:52.313 {
00:13:52.313 "subsystem": "sock",
00:13:52.313 "config": [
00:13:52.313 {
00:13:52.313 "method": "sock_set_default_impl",
00:13:52.313 "params": {
00:13:52.313 "impl_name": "posix"
00:13:52.313 }
00:13:52.313 },
00:13:52.313 {
00:13:52.313 "method": "sock_impl_set_options",
00:13:52.313 "params": {
00:13:52.313 "impl_name": "ssl",
00:13:52.313 "recv_buf_size": 4096,
00:13:52.313 "send_buf_size": 4096,
00:13:52.313 "enable_recv_pipe": true,
00:13:52.313 "enable_quickack": false,
00:13:52.313 "enable_placement_id": 0,
00:13:52.313 "enable_zerocopy_send_server": true,
00:13:52.313 "enable_zerocopy_send_client": false,
00:13:52.313 "zerocopy_threshold": 0,
00:13:52.313 "tls_version": 0,
00:13:52.313 "enable_ktls": false
00:13:52.313 }
00:13:52.313 },
00:13:52.313 {
00:13:52.313 "method": "sock_impl_set_options",
00:13:52.313 "params": {
00:13:52.313 "impl_name": "posix",
00:13:52.313 "recv_buf_size": 2097152,
00:13:52.313 "send_buf_size": 2097152,
00:13:52.313 "enable_recv_pipe": true,
00:13:52.314 "enable_quickack": false,
00:13:52.314 "enable_placement_id": 0,
00:13:52.314 "enable_zerocopy_send_server": true,
00:13:52.314 "enable_zerocopy_send_client": false,
00:13:52.314 "zerocopy_threshold": 0,
00:13:52.314 "tls_version": 0,
00:13:52.314 "enable_ktls": false
00:13:52.314 }
00:13:52.314 }
00:13:52.314 ]
00:13:52.314 },
00:13:52.314 {
00:13:52.314 "subsystem": "vmd",
00:13:52.314 "config": []
00:13:52.314 },
00:13:52.314 {
00:13:52.314 "subsystem": "accel",
00:13:52.314 "config": [
00:13:52.314 {
00:13:52.314 "method": "accel_set_options",
00:13:52.314 "params": {
00:13:52.314 "small_cache_size": 128,
00:13:52.314 "large_cache_size": 16,
00:13:52.314 "task_count": 2048,
00:13:52.314 "sequence_count": 2048,
00:13:52.314 "buf_count": 2048
00:13:52.314 }
00:13:52.314 }
00:13:52.314 ]
00:13:52.314 },
00:13:52.314 {
00:13:52.314 "subsystem": "bdev",
00:13:52.314 "config": [
00:13:52.314 {
00:13:52.314 "method": "bdev_set_options",
00:13:52.314 "params": {
00:13:52.314 "bdev_io_pool_size": 65535,
00:13:52.314 "bdev_io_cache_size": 256,
00:13:52.314 "bdev_auto_examine": true,
00:13:52.314 "iobuf_small_cache_size": 128,
00:13:52.314 "iobuf_large_cache_size": 16
00:13:52.314 }
00:13:52.314 },
00:13:52.314 {
00:13:52.314 "method": "bdev_raid_set_options",
00:13:52.314 "params": {
00:13:52.314 "process_window_size_kb": 1024,
00:13:52.314 "process_max_bandwidth_mb_sec": 0
00:13:52.314 }
00:13:52.314 },
00:13:52.314 {
00:13:52.314 "method": "bdev_iscsi_set_options",
00:13:52.314 "params": {
00:13:52.314 "timeout_sec": 30
00:13:52.314 }
00:13:52.314 },
00:13:52.314 {
00:13:52.314 "method": "bdev_nvme_set_options",
00:13:52.314 "params": {
00:13:52.314 "action_on_timeout": "none",
00:13:52.314 "timeout_us": 0,
00:13:52.314 "timeout_admin_us": 0,
00:13:52.314 "keep_alive_timeout_ms": 10000,
00:13:52.314 "arbitration_burst": 0,
00:13:52.314 "low_priority_weight": 0,
00:13:52.314 "medium_priority_weight": 0,
00:13:52.314 "high_priority_weight": 0,
00:13:52.314 "nvme_adminq_poll_period_us": 10000,
00:13:52.314 "nvme_ioq_poll_period_us": 0,
00:13:52.314 "io_queue_requests": 0,
00:13:52.314 "delay_cmd_submit": true,
00:13:52.314 "transport_retry_count": 4,
00:13:52.314 "bdev_retry_count": 3,
00:13:52.314 "transport_ack_timeout": 0,
00:13:52.314 "ctrlr_loss_timeout_sec": 0,
00:13:52.314 "reconnect_delay_sec": 0,
00:13:52.314 "fast_io_fail_timeout_sec": 0,
00:13:52.314 "disable_auto_failback": false,
00:13:52.314 "generate_uuids": false,
00:13:52.314 "transport_tos": 0,
00:13:52.314 "nvme_error_stat": false,
00:13:52.314 "rdma_srq_size": 0,
00:13:52.314 "io_path_stat": false,
00:13:52.314 "allow_accel_sequence": false,
00:13:52.314 "rdma_max_cq_size": 0,
00:13:52.314 "rdma_cm_event_timeout_ms": 0,
00:13:52.314 "dhchap_digests": [
00:13:52.314 "sha256",
00:13:52.314 "sha384",
00:13:52.314 "sha512"
00:13:52.314 ],
00:13:52.314 "dhchap_dhgroups": [
00:13:52.314 "null",
00:13:52.314 "ffdhe2048",
00:13:52.314 "ffdhe3072",
00:13:52.314 "ffdhe4096",
00:13:52.314 "ffdhe6144",
00:13:52.314 "ffdhe8192"
00:13:52.314 ]
00:13:52.314 }
00:13:52.314 },
00:13:52.314 {
00:13:52.314 "method": "bdev_nvme_set_hotplug",
00:13:52.314 "params": {
00:13:52.314 "period_us": 100000,
00:13:52.314 "enable": false
00:13:52.314 }
00:13:52.314 },
00:13:52.314 {
00:13:52.314 "method": "bdev_malloc_create",
00:13:52.314 "params": {
00:13:52.314 "name": "malloc0",
00:13:52.314 "num_blocks": 8192,
00:13:52.314 "block_size": 4096,
00:13:52.314 "physical_block_size": 4096,
00:13:52.314 "uuid": "374a5ec4-c7ba-41d3-b496-670e65b5ed2f",
00:13:52.314 "optimal_io_boundary": 0,
00:13:52.314 "md_size": 0,
00:13:52.314 "dif_type": 0,
00:13:52.314 "dif_is_head_of_md": false,
00:13:52.314 "dif_pi_format": 0
00:13:52.314 }
00:13:52.314 },
00:13:52.314 {
00:13:52.314 "method": "bdev_wait_for_examine"
00:13:52.314 }
00:13:52.314 ]
00:13:52.314 },
00:13:52.314 {
00:13:52.314 "subsystem": "scsi",
00:13:52.314 "config": null
00:13:52.314 },
00:13:52.314 {
00:13:52.314 "subsystem": "scheduler",
00:13:52.314 "config": [
00:13:52.314 {
00:13:52.314 "method": "framework_set_scheduler",
00:13:52.314 "params": {
00:13:52.314 "name": "static"
00:13:52.314 }
00:13:52.314 }
00:13:52.314 ]
00:13:52.314 },
00:13:52.314 {
00:13:52.314 "subsystem": "vhost_scsi",
00:13:52.314 "config": []
00:13:52.314 },
00:13:52.314 {
00:13:52.314 "subsystem": "vhost_blk",
00:13:52.314 "config": []
00:13:52.314 },
00:13:52.314 {
00:13:52.314 "subsystem": "ublk",
00:13:52.314 "config": [
00:13:52.314 {
00:13:52.314 "method": "ublk_create_target",
00:13:52.314 "params": {
00:13:52.314 "cpumask": "1"
00:13:52.314 }
00:13:52.314 },
00:13:52.314 {
00:13:52.314 "method": "ublk_start_disk",
00:13:52.314 "params": {
00:13:52.314 "bdev_name": "malloc0",
00:13:52.314 "ublk_id": 0,
00:13:52.314 "num_queues": 1,
00:13:52.314 "queue_depth": 128
00:13:52.314 }
00:13:52.314 }
00:13:52.314 ]
00:13:52.314 },
00:13:52.314 {
00:13:52.314 "subsystem": "nbd",
00:13:52.314 "config": []
00:13:52.314 },
00:13:52.314 {
00:13:52.314 "subsystem": "nvmf",
00:13:52.314 "config": [
00:13:52.314 {
00:13:52.314 "method": "nvmf_set_config",
00:13:52.314 "params": {
00:13:52.314 "discovery_filter": "match_any",
00:13:52.314 "admin_cmd_passthru": {
00:13:52.314 "identify_ctrlr": false
00:13:52.314 },
00:13:52.314 "dhchap_digests": [
00:13:52.314 "sha256",
00:13:52.314 "sha384",
00:13:52.314 "sha512"
00:13:52.314 ],
00:13:52.314 "dhchap_dhgroups": [
00:13:52.314 "null",
00:13:52.314 "ffdhe2048",
00:13:52.314 "ffdhe3072",
00:13:52.314 "ffdhe4096",
00:13:52.314 "ffdhe6144",
00:13:52.314 "ffdhe8192"
00:13:52.314 ]
00:13:52.314 }
00:13:52.314 },
00:13:52.314 {
00:13:52.314 "method": "nvmf_set_max_subsystems",
00:13:52.314 "params": {
00:13:52.314 "max_subsystems": 1024
00:13:52.314 }
00:13:52.314 },
00:13:52.314 {
00:13:52.314 "method": "nvmf_set_crdt",
00:13:52.314 "params": {
00:13:52.314 "crdt1": 0,
00:13:52.314 "crdt2": 0,
00:13:52.314 "crdt3": 0
00:13:52.314 }
00:13:52.314 }
00:13:52.314 ]
00:13:52.314 },
00:13:52.314 {
00:13:52.314 "subsystem": "iscsi",
00:13:52.314 "config": [
00:13:52.314 {
00:13:52.314 "method": "iscsi_set_options",
00:13:52.314 "params": {
00:13:52.314 "node_base": "iqn.2016-06.io.spdk",
00:13:52.315 "max_sessions": 128,
00:13:52.315 "max_connections_per_session": 2,
00:13:52.315 "max_queue_depth": 64,
00:13:52.315 "default_time2wait": 2,
00:13:52.315 "default_time2retain": 20,
00:13:52.315 "first_burst_length": 8192,
00:13:52.315 "immediate_data": true,
00:13:52.315 "allow_duplicated_isid": false,
00:13:52.315 "error_recovery_level": 0,
00:13:52.315 "nop_timeout": 60,
00:13:52.315 "nop_in_interval": 30,
00:13:52.315 "disable_chap": false,
00:13:52.315 "require_chap": false,
00:13:52.315 "mutual_chap": false,
00:13:52.315 "chap_group": 0,
00:13:52.315 "max_large_datain_per_connection": 64,
00:13:52.315 "max_r2t_per_connection": 4,
00:13:52.315 "pdu_pool_size": 36864,
00:13:52.315 "immediate_data_pool_size": 16384,
00:13:52.315 "data_out_pool_size": 2048
00:13:52.315 }
00:13:52.315 }
00:13:52.315 ]
00:13:52.315 }
00:13:52.315 ]
00:13:52.315 }'
00:13:52.315 [2024-11-07 09:42:19.648403] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization...
00:13:52.315 [2024-11-07 09:42:19.648668] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70906 ]
[2024-11-07 09:42:19.805015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:52.315 [2024-11-07 09:42:19.889304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:52.883 [2024-11-07 09:42:20.527644] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled
00:13:52.883 [2024-11-07 09:42:20.528278] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully
00:13:52.883 [2024-11-07 09:42:20.535737] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128
00:13:52.883 [2024-11-07 09:42:20.535795] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0
00:13:52.883 [2024-11-07 09:42:20.535803] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq
00:13:52.883 [2024-11-07 09:42:20.535809] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV
00:13:52.883 [2024-11-07 09:42:20.544700] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed
00:13:52.883 [2024-11-07 09:42:20.544718] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS
00:13:52.883 [2024-11-07 09:42:20.551650] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:13:52.883 [2024-11-07 09:42:20.551722] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV
00:13:53.140 [2024-11-07 09:42:20.568644] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed
00:13:53.140 09:42:20 ublk.test_save_ublk_config -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:13:53.140 09:42:20 ublk.test_save_ublk_config -- common/autotest_common.sh@866 -- # return 0
00:13:53.140 09:42:20 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device'
00:13:53.140 09:42:20 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks
00:13:53.140 09:42:20 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:53.140 09:42:20 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x
00:13:53.141 09:42:20 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:53.141 09:42:20 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]]
00:13:53.141 09:42:20 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]]
00:13:53.141 09:42:20 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 70906
00:13:53.141 09:42:20 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # '[' -z 70906 ']'
00:13:53.141 09:42:20 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # kill -0 70906
00:13:53.141 09:42:20 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # uname
00:13:53.141 09:42:20 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:13:53.141 09:42:20 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70906
00:13:53.141 09:42:20 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:13:53.141 09:42:20 ublk.test_save_ublk_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:13:53.141 killing process with pid 70906
00:13:53.141 09:42:20 ublk.test_save_ublk_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70906'
00:13:53.141 09:42:20 ublk.test_save_ublk_config -- common/autotest_common.sh@971 -- # kill 70906
00:13:53.141 09:42:20 ublk.test_save_ublk_config -- common/autotest_common.sh@976 -- # wait 70906
00:13:54.076 [2024-11-07 09:42:21.652908] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV
00:13:54.076 [2024-11-07 09:42:21.683711] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed
00:13:54.076 [2024-11-07 09:42:21.683807] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV
00:13:54.076 [2024-11-07 09:42:21.690654] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed
00:13:54.076 [2024-11-07 09:42:21.690693] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq
00:13:54.076 [2024-11-07 09:42:21.690699] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped
00:13:54.076 [2024-11-07 09:42:21.690718] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
00:13:54.076 [2024-11-07 09:42:21.690824] ublk.c: 766:_ublk_fini_done: *DEBUG*:
00:13:55.451 09:42:22 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT
00:13:55.451
00:13:55.451 real 0m7.295s
00:13:55.451 user 0m5.095s
00:13:55.451 sys 0m2.844s
00:13:55.451 09:42:22 ublk.test_save_ublk_config -- common/autotest_common.sh@1128 -- # xtrace_disable
00:13:55.451 ************************************
00:13:55.451 END TEST test_save_ublk_config
00:13:55.451 ************************************
00:13:55.451 09:42:22 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x
00:13:55.451 09:42:22 ublk -- ublk/ublk.sh@139 -- # spdk_pid=70977
00:13:55.451 09:42:22 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:13:55.451 09:42:22 ublk -- ublk/ublk.sh@141 -- # waitforlisten 70977
00:13:55.451 09:42:22 ublk -- common/autotest_common.sh@833 -- # '[' -z 70977 ']'
00:13:55.451 09:42:22 ublk -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:55.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:55.451 09:42:22 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk
00:13:55.451 09:42:22 ublk -- common/autotest_common.sh@838 -- # local max_retries=100
00:13:55.451 09:42:22 ublk -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:55.451 09:42:22 ublk -- common/autotest_common.sh@842 -- # xtrace_disable
00:13:55.451 09:42:22 ublk -- common/autotest_common.sh@10 -- # set +x
00:13:55.451 [2024-11-07 09:42:22.973146] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization...
00:13:55.451 [2024-11-07 09:42:22.973622] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70977 ]
[2024-11-07 09:42:23.129881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:13:55.710 [2024-11-07 09:42:23.216620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:55.710 [2024-11-07 09:42:23.216667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:13:56.280 09:42:23 ublk -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:13:56.280 09:42:23 ublk -- common/autotest_common.sh@866 -- # return 0
00:13:56.280 09:42:23 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk
00:13:56.280 09:42:23 ublk -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:13:56.280 09:42:23 ublk -- common/autotest_common.sh@1109 -- # xtrace_disable
00:13:56.280 09:42:23 ublk -- common/autotest_common.sh@10 -- # set +x
00:13:56.280 ************************************
00:13:56.280 START TEST test_create_ublk
00:13:56.280 ************************************
00:13:56.280 09:42:23 ublk.test_create_ublk -- common/autotest_common.sh@1127 -- # test_create_ublk
00:13:56.280 09:42:23 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target
00:13:56.280 09:42:23 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:56.280 09:42:23 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:13:56.280 [2024-11-07 09:42:23.852665] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled
00:13:56.280 [2024-11-07 09:42:23.855324] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully
00:13:56.280 09:42:23 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:56.280 09:42:23 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target=
00:13:56.280 09:42:23 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096
00:13:56.280 09:42:23 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:56.280 09:42:23 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:13:56.540 09:42:24 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:56.540 09:42:24 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0
00:13:56.540 09:42:24 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512
00:13:56.540 09:42:24 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:56.540 09:42:24 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:13:56.540 [2024-11-07 09:42:24.114876] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512
00:13:56.540 [2024-11-07 09:42:24.115391] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0
00:13:56.540 [2024-11-07 09:42:24.115415] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq
00:13:56.540 [2024-11-07 09:42:24.115425] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV
00:13:56.540 [2024-11-07 09:42:24.124102] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed
00:13:56.540 [2024-11-07 09:42:24.124140] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS
[2024-11-07 09:42:24.130691] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:13:56.540 [2024-11-07 09:42:24.146741] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV
00:13:56.540 [2024-11-07 09:42:24.183709] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed
00:13:56.540 09:42:24 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:56.540 09:42:24 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0
00:13:56.540 09:42:24 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0
00:13:56.540 09:42:24 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0
00:13:56.540 09:42:24 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:56.540 09:42:24 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:13:56.540 09:42:24 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:56.800 09:42:24 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[
00:13:56.800 {
00:13:56.800 "ublk_device": "/dev/ublkb0",
00:13:56.800 "id": 0,
00:13:56.800 "queue_depth": 512,
00:13:56.800 "num_queues": 4,
00:13:56.800 "bdev_name": "Malloc0"
00:13:56.800 }
00:13:56.800 ]'
00:13:56.800 09:42:24 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device'
00:13:56.800 09:42:24 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]]
00:13:56.800 09:42:24 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id'
00:13:56.800 09:42:24 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]]
00:13:56.800 09:42:24 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth'
00:13:56.800 09:42:24 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]]
00:13:56.800 09:42:24 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues'
00:13:56.800 09:42:24 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]]
00:13:56.800 09:42:24 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name'
00:13:56.800 09:42:24 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]]
00:13:56.800 09:42:24 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10'
00:13:56.800 09:42:24 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0
00:13:56.800 09:42:24 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0
00:13:56.800 09:42:24 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728
00:13:56.800 09:42:24 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write
00:13:56.800 09:42:24 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc
00:13:56.800 09:42:24 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10'
00:13:56.800 09:42:24 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template=
00:13:56.800 09:42:24 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]]
00:13:56.800 09:42:24 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0'
00:13:56.800 09:42:24 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0'
00:13:56.800 09:42:24 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:13:57.061 fio: verification read phase will never start because write phase uses all of runtime 00:13:57.061 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:13:57.061 fio-3.35 00:13:57.061 Starting 1 process 00:14:07.037 00:14:07.037 fio_test: (groupid=0, jobs=1): err= 0: pid=71023: Thu Nov 7 09:42:34 2024 00:14:07.037 write: IOPS=15.9k, BW=62.1MiB/s (65.1MB/s)(621MiB/10001msec); 0 zone resets 00:14:07.037 clat (usec): min=37, max=3906, avg=62.21, stdev=80.72 00:14:07.037 lat (usec): min=37, max=3906, avg=62.59, stdev=80.73 00:14:07.037 clat percentiles (usec): 00:14:07.037 | 1.00th=[ 44], 5.00th=[ 51], 10.00th=[ 53], 20.00th=[ 55], 00:14:07.037 | 30.00th=[ 56], 40.00th=[ 58], 50.00th=[ 59], 60.00th=[ 60], 00:14:07.037 | 70.00th=[ 62], 80.00th=[ 64], 90.00th=[ 68], 95.00th=[ 73], 00:14:07.037 | 99.00th=[ 87], 99.50th=[ 100], 99.90th=[ 1303], 99.95th=[ 2311], 00:14:07.037 | 99.99th=[ 3359] 00:14:07.037 bw ( KiB/s): min=55856, max=66299, per=100.00%, avg=63619.53, stdev=2448.82, samples=19 00:14:07.037 iops : min=13964, max=16574, avg=15904.84, stdev=612.16, samples=19 00:14:07.037 lat (usec) : 50=4.62%, 100=94.88%, 250=0.30%, 500=0.06%, 750=0.01% 00:14:07.037 lat (usec) : 1000=0.01% 00:14:07.037 lat (msec) : 2=0.05%, 4=0.07% 00:14:07.037 cpu : usr=2.35%, sys=10.90%, ctx=159052, majf=0, minf=796 00:14:07.037 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:07.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:07.037 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:07.037 issued rwts: total=0,159052,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:07.037 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:07.037 00:14:07.037 Run status group 0 (all jobs): 00:14:07.037 WRITE: bw=62.1MiB/s (65.1MB/s), 62.1MiB/s-62.1MiB/s (65.1MB/s-65.1MB/s), io=621MiB (651MB), run=10001-10001msec 00:14:07.037 00:14:07.037 Disk stats (read/write): 00:14:07.037 ublkb0: ios=0/157390, merge=0/0, ticks=0/8639, in_queue=8640, util=99.06% 00:14:07.037 09:42:34 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:14:07.037 09:42:34 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.037 09:42:34 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:07.037 [2024-11-07 09:42:34.602197] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:07.037 [2024-11-07 09:42:34.639297] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:07.037 [2024-11-07 09:42:34.640212] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:07.037 [2024-11-07 09:42:34.645656] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:07.037 [2024-11-07 09:42:34.645918] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:07.037 [2024-11-07 09:42:34.645932] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:07.037 09:42:34 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.037 09:42:34 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 
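Having stopped ublk device 0, the test issues a second ublk_stop_disk call via the NOT helper and expects it to fail with -ENODEV (-19), as the JSON-RPC response below shows. A minimal sketch of that negative check (assuming rpc.py from the SPDK scripts directory, as used elsewhere in this run):

    # Stopping an already-stopped ublk device must fail; treat success as a bug.
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_stop_disk 0; then
        echo 'ublk_stop_disk unexpectedly succeeded' >&2
        exit 1
    fi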
00:14:07.037 09:42:34 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:14:07.037 09:42:34 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:14:07.037 09:42:34 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:07.037 09:42:34 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:07.037 09:42:34 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:07.037 09:42:34 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:07.037 09:42:34 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:14:07.037 09:42:34 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.037 09:42:34 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:07.037 [2024-11-07 09:42:34.661709] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:14:07.037 request: 00:14:07.037 { 00:14:07.037 "ublk_id": 0, 00:14:07.037 "method": "ublk_stop_disk", 00:14:07.037 "req_id": 1 00:14:07.037 } 00:14:07.037 Got JSON-RPC error response 00:14:07.037 response: 00:14:07.037 { 00:14:07.037 "code": -19, 00:14:07.037 "message": "No such device" 00:14:07.037 } 00:14:07.037 09:42:34 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:07.037 09:42:34 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:14:07.037 09:42:34 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:07.038 09:42:34 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:07.038 09:42:34 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:07.038 09:42:34 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:14:07.038 09:42:34 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.038 09:42:34 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:07.038 [2024-11-07 09:42:34.677715] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:07.038 [2024-11-07 09:42:34.685644] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:07.038 [2024-11-07 09:42:34.685679] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:07.038 09:42:34 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.038 09:42:34 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:07.038 09:42:34 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.038 09:42:34 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:07.620 09:42:35 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.620 09:42:35 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:14:07.620 09:42:35 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:07.620 09:42:35 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.620 09:42:35 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:07.620 09:42:35 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.620 09:42:35 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:07.620 09:42:35 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:14:07.620 09:42:35 ublk.test_create_ublk -- 
lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:07.620 09:42:35 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:07.620 09:42:35 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.620 09:42:35 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:07.620 09:42:35 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.620 09:42:35 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:07.620 09:42:35 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:14:07.620 09:42:35 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:07.620 00:14:07.620 real 0m11.325s 00:14:07.620 user 0m0.529s 00:14:07.620 sys 0m1.177s 00:14:07.620 09:42:35 ublk.test_create_ublk -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:07.620 09:42:35 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:07.620 ************************************ 00:14:07.620 END TEST test_create_ublk 00:14:07.620 ************************************ 00:14:07.620 09:42:35 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:14:07.620 09:42:35 ublk -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:14:07.620 09:42:35 ublk -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:07.620 09:42:35 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:07.620 ************************************ 00:14:07.620 START TEST test_create_multi_ublk 00:14:07.620 ************************************ 00:14:07.620 09:42:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@1127 -- # test_create_multi_ublk 00:14:07.620 09:42:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:14:07.620 09:42:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.620 09:42:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:07.620 [2024-11-07 09:42:35.220640] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:07.620 [2024-11-07 09:42:35.222311] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:07.620 09:42:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.620 09:42:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:14:07.620 09:42:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:14:07.620 09:42:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:07.620 09:42:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:14:07.620 09:42:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.620 09:42:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:07.879 09:42:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.879 09:42:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:14:07.879 09:42:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:07.879 09:42:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.879 09:42:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:07.879 [2024-11-07 09:42:35.472768] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 
00:14:07.879 [2024-11-07 09:42:35.473103] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:07.879 [2024-11-07 09:42:35.473116] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:07.879 [2024-11-07 09:42:35.473125] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:07.879 [2024-11-07 09:42:35.484669] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:07.879 [2024-11-07 09:42:35.484692] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:07.879 [2024-11-07 09:42:35.496659] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:07.879 [2024-11-07 09:42:35.497193] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:07.879 [2024-11-07 09:42:35.505766] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:07.879 09:42:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.879 09:42:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:14:07.879 09:42:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:07.879 09:42:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:14:07.879 09:42:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.879 09:42:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:08.137 09:42:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.137 09:42:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:14:08.137 09:42:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:14:08.137 09:42:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.137 09:42:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:08.137 [2024-11-07 09:42:35.767756] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:14:08.137 [2024-11-07 09:42:35.768076] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:14:08.137 [2024-11-07 09:42:35.768090] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:08.137 [2024-11-07 09:42:35.768095] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:08.137 [2024-11-07 09:42:35.779668] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:08.137 [2024-11-07 09:42:35.779686] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:08.137 [2024-11-07 09:42:35.791666] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:08.137 [2024-11-07 09:42:35.792183] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:08.394 [2024-11-07 09:42:35.815661] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:08.394 09:42:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.394 09:42:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:14:08.394 09:42:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:08.394 09:42:35 
ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:14:08.394 09:42:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.394 09:42:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:08.394 09:42:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.394 09:42:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:14:08.394 09:42:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:14:08.394 09:42:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.394 09:42:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:08.394 [2024-11-07 09:42:36.022753] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:14:08.394 [2024-11-07 09:42:36.023089] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:14:08.394 [2024-11-07 09:42:36.023102] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:14:08.394 [2024-11-07 09:42:36.023109] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:14:08.394 [2024-11-07 09:42:36.030667] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:08.394 [2024-11-07 09:42:36.030689] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:08.394 [2024-11-07 09:42:36.038655] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:08.394 [2024-11-07 09:42:36.039204] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:14:08.394 [2024-11-07 09:42:36.047683] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:14:08.394 09:42:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.394 09:42:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:14:08.394 09:42:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:08.394 09:42:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:14:08.394 09:42:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.394 09:42:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:08.652 09:42:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.652 09:42:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:14:08.652 09:42:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:14:08.652 09:42:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.652 09:42:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:08.652 [2024-11-07 09:42:36.222759] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:14:08.652 [2024-11-07 09:42:36.223086] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:14:08.652 [2024-11-07 09:42:36.223100] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:14:08.652 [2024-11-07 09:42:36.223105] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:14:08.652 [2024-11-07 
09:42:36.230669] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:08.652 [2024-11-07 09:42:36.230686] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:08.652 [2024-11-07 09:42:36.238658] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:08.652 [2024-11-07 09:42:36.239194] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:14:08.652 [2024-11-07 09:42:36.247694] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:14:08.652 09:42:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.652 09:42:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:14:08.652 09:42:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:14:08.652 09:42:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.652 09:42:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:08.652 09:42:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.652 09:42:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:14:08.652 { 00:14:08.652 "ublk_device": "/dev/ublkb0", 00:14:08.652 "id": 0, 00:14:08.652 "queue_depth": 512, 00:14:08.652 "num_queues": 4, 00:14:08.652 "bdev_name": "Malloc0" 00:14:08.652 }, 00:14:08.652 { 00:14:08.652 "ublk_device": "/dev/ublkb1", 00:14:08.652 "id": 1, 00:14:08.652 "queue_depth": 512, 00:14:08.652 "num_queues": 4, 00:14:08.652 "bdev_name": "Malloc1" 00:14:08.652 }, 00:14:08.652 { 00:14:08.652 "ublk_device": "/dev/ublkb2", 00:14:08.652 "id": 2, 00:14:08.652 "queue_depth": 512, 00:14:08.652 "num_queues": 4, 00:14:08.652 "bdev_name": "Malloc2" 00:14:08.652 }, 00:14:08.652 { 00:14:08.652 "ublk_device": "/dev/ublkb3", 00:14:08.652 "id": 3, 00:14:08.652 "queue_depth": 512, 00:14:08.652 "num_queues": 4, 00:14:08.652 "bdev_name": "Malloc3" 00:14:08.652 } 00:14:08.652 ]' 00:14:08.652 09:42:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:14:08.652 09:42:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:08.652 09:42:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:14:08.652 09:42:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:08.652 09:42:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:14:08.910 09:42:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:14:08.910 09:42:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:14:08.910 09:42:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:08.910 09:42:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:14:08.910 09:42:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:08.910 09:42:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:14:08.910 09:42:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:08.910 09:42:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:08.910 09:42:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:14:08.910 09:42:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 
00:14:08.910 09:42:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:14:08.910 09:42:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:14:08.910 09:42:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:14:08.910 09:42:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:08.910 09:42:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:14:08.910 09:42:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:08.910 09:42:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:14:09.168 09:42:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:14:09.168 09:42:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:09.168 09:42:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:14:09.168 09:42:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:14:09.168 09:42:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:14:09.168 09:42:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:14:09.168 09:42:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:14:09.168 09:42:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:09.168 09:42:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:14:09.168 09:42:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:09.168 09:42:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:14:09.168 09:42:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:14:09.168 09:42:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:09.168 09:42:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:14:09.168 09:42:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:14:09.168 09:42:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:14:09.168 09:42:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:14:09.168 09:42:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:14:09.426 09:42:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:09.426 09:42:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:14:09.426 09:42:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:09.426 09:42:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:14:09.426 09:42:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:14:09.426 09:42:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:14:09.427 09:42:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:14:09.427 09:42:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:09.427 09:42:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:14:09.427 09:42:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.427 09:42:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:09.427 [2024-11-07 09:42:36.918729] ublk.c: 469:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:09.427 [2024-11-07 09:42:36.958690] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:09.427 [2024-11-07 09:42:36.959515] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:09.427 [2024-11-07 09:42:36.967689] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:09.427 [2024-11-07 09:42:36.967921] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:09.427 [2024-11-07 09:42:36.967935] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:09.427 09:42:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.427 09:42:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:09.427 09:42:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:14:09.427 09:42:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.427 09:42:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:09.427 [2024-11-07 09:42:36.982723] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:14:09.427 [2024-11-07 09:42:37.022650] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:09.427 [2024-11-07 09:42:37.023472] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:14:09.427 [2024-11-07 09:42:37.031688] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:09.427 [2024-11-07 09:42:37.031928] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:14:09.427 [2024-11-07 09:42:37.031942] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:14:09.427 09:42:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.427 09:42:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:09.427 09:42:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:14:09.427 09:42:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.427 09:42:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:09.427 [2024-11-07 09:42:37.046737] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:14:09.427 [2024-11-07 09:42:37.086236] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:09.427 [2024-11-07 09:42:37.087193] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:14:09.427 [2024-11-07 09:42:37.094654] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:09.427 [2024-11-07 09:42:37.094912] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:14:09.427 [2024-11-07 09:42:37.094927] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:14:09.427 09:42:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.427 09:42:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:09.685 09:42:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:14:09.685 09:42:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.685 09:42:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 
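The teardown mirrors the creation loop: the test iterates seq 0 $MAX_DEV_ID (0 3 in this run) and stops each device in turn, with device 3's stop completing below. A condensed sketch of the create/stop pairing exercised by test_create_multi_ublk (RPC names and sizes are taken from the trace above; rpc_cmd is the harness's RPC wrapper):

    # Create four 128 MiB malloc bdevs (4096-byte blocks) and expose each
    # as /dev/ublkb$i, then stop them in the same order during teardown.
    for i in $(seq 0 3); do
        rpc_cmd bdev_malloc_create -b "Malloc$i" 128 4096
        rpc_cmd ublk_start_disk "Malloc$i" "$i" -q 4 -d 512   # 4 queues, depth 512
    done
    for i in $(seq 0 3); do
        rpc_cmd ublk_stop_disk "$i"
    done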
00:14:09.685 [2024-11-07 09:42:37.102715] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:14:09.685 [2024-11-07 09:42:37.142679] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:09.685 [2024-11-07 09:42:37.143350] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:14:09.685 [2024-11-07 09:42:37.151681] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:09.685 [2024-11-07 09:42:37.151922] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:14:09.685 [2024-11-07 09:42:37.151934] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:14:09.685 09:42:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.685 09:42:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:14:09.685 [2024-11-07 09:42:37.342696] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:09.685 [2024-11-07 09:42:37.350645] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:09.685 [2024-11-07 09:42:37.350677] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:09.944 09:42:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:14:09.944 09:42:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:09.944 09:42:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:09.944 09:42:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.944 09:42:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:10.202 09:42:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.202 09:42:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:10.202 09:42:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:14:10.202 09:42:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.202 09:42:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:10.461 09:42:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.461 09:42:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:10.461 09:42:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:14:10.461 09:42:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.461 09:42:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:10.720 09:42:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.720 09:42:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:10.720 09:42:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:14:10.720 09:42:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.720 09:42:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:10.978 09:42:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.978 09:42:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:14:10.978 09:42:38 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:14:10.978 09:42:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.978 09:42:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:10.978 09:42:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.978 09:42:38 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:10.978 09:42:38 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:14:10.978 09:42:38 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:10.978 09:42:38 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:10.978 09:42:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.978 09:42:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:10.978 09:42:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.978 09:42:38 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:10.978 09:42:38 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:14:10.978 09:42:38 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:10.978 00:14:10.978 real 0m3.392s 00:14:10.978 user 0m0.814s 00:14:10.978 sys 0m0.148s 00:14:10.978 09:42:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:10.978 ************************************ 00:14:10.978 END TEST test_create_multi_ublk 00:14:10.978 ************************************ 00:14:10.978 09:42:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:10.978 09:42:38 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:14:10.978 09:42:38 ublk -- ublk/ublk.sh@147 -- # cleanup 00:14:10.978 09:42:38 ublk -- ublk/ublk.sh@130 -- # killprocess 70977 00:14:10.978 09:42:38 ublk -- common/autotest_common.sh@952 -- # '[' -z 70977 ']' 00:14:10.978 09:42:38 ublk -- common/autotest_common.sh@956 -- # kill -0 70977 00:14:10.978 09:42:38 ublk -- common/autotest_common.sh@957 -- # uname 00:14:10.978 09:42:38 ublk -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:10.978 09:42:38 ublk -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70977 00:14:11.236 09:42:38 ublk -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:11.236 09:42:38 ublk -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:11.236 09:42:38 ublk -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70977' 00:14:11.236 killing process with pid 70977 00:14:11.236 09:42:38 ublk -- common/autotest_common.sh@971 -- # kill 70977 00:14:11.237 09:42:38 ublk -- common/autotest_common.sh@976 -- # wait 70977 00:14:11.803 [2024-11-07 09:42:39.241588] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:11.803 [2024-11-07 09:42:39.241646] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:12.381 00:14:12.381 real 0m24.541s 00:14:12.381 user 0m35.597s 00:14:12.381 sys 0m8.959s 00:14:12.381 09:42:39 ublk -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:12.381 ************************************ 00:14:12.381 END TEST ublk 00:14:12.381 ************************************ 00:14:12.381 09:42:39 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:12.381 09:42:39 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:12.381 09:42:39 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 
']' 00:14:12.381 09:42:39 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:12.381 09:42:39 -- common/autotest_common.sh@10 -- # set +x 00:14:12.381 ************************************ 00:14:12.381 START TEST ublk_recovery 00:14:12.381 ************************************ 00:14:12.381 09:42:39 ublk_recovery -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:12.381 * Looking for test storage... 00:14:12.381 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:12.381 09:42:40 ublk_recovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:12.381 09:42:40 ublk_recovery -- common/autotest_common.sh@1691 -- # lcov --version 00:14:12.381 09:42:40 ublk_recovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:12.641 09:42:40 ublk_recovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:12.641 09:42:40 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:12.641 09:42:40 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:12.641 09:42:40 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:12.641 09:42:40 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:14:12.641 09:42:40 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:14:12.641 09:42:40 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:14:12.641 09:42:40 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:14:12.641 09:42:40 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:14:12.641 09:42:40 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:14:12.641 09:42:40 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:14:12.641 09:42:40 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:12.641 09:42:40 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:14:12.641 09:42:40 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:14:12.641 09:42:40 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:12.641 09:42:40 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:12.641 09:42:40 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:14:12.641 09:42:40 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:14:12.641 09:42:40 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:12.641 09:42:40 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:14:12.641 09:42:40 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:14:12.641 09:42:40 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:14:12.641 09:42:40 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:14:12.641 09:42:40 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:12.641 09:42:40 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:14:12.641 09:42:40 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:14:12.641 09:42:40 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:12.641 09:42:40 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:12.641 09:42:40 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:14:12.641 09:42:40 ublk_recovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:12.641 09:42:40 ublk_recovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:12.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.641 --rc genhtml_branch_coverage=1 00:14:12.641 --rc genhtml_function_coverage=1 00:14:12.641 --rc genhtml_legend=1 00:14:12.641 --rc geninfo_all_blocks=1 00:14:12.641 --rc geninfo_unexecuted_blocks=1 00:14:12.641 00:14:12.641 ' 00:14:12.641 09:42:40 ublk_recovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:12.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.641 --rc genhtml_branch_coverage=1 00:14:12.641 --rc genhtml_function_coverage=1 00:14:12.641 --rc genhtml_legend=1 00:14:12.641 --rc geninfo_all_blocks=1 00:14:12.641 --rc geninfo_unexecuted_blocks=1 00:14:12.641 00:14:12.641 ' 00:14:12.641 09:42:40 ublk_recovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:12.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.641 --rc genhtml_branch_coverage=1 00:14:12.641 --rc genhtml_function_coverage=1 00:14:12.641 --rc genhtml_legend=1 00:14:12.641 --rc geninfo_all_blocks=1 00:14:12.641 --rc geninfo_unexecuted_blocks=1 00:14:12.641 00:14:12.641 ' 00:14:12.641 09:42:40 ublk_recovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:12.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.641 --rc genhtml_branch_coverage=1 00:14:12.641 --rc genhtml_function_coverage=1 00:14:12.641 --rc genhtml_legend=1 00:14:12.641 --rc geninfo_all_blocks=1 00:14:12.641 --rc geninfo_unexecuted_blocks=1 00:14:12.641 00:14:12.641 ' 00:14:12.641 09:42:40 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:12.641 09:42:40 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:12.641 09:42:40 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:12.641 09:42:40 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:12.641 09:42:40 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:12.641 09:42:40 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:12.641 09:42:40 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:12.641 09:42:40 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:12.641 09:42:40 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:14:12.641 09:42:40 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:14:12.641 09:42:40 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=71368 00:14:12.641 09:42:40 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:12.641 09:42:40 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 71368 00:14:12.641 09:42:40 ublk_recovery -- common/autotest_common.sh@833 -- # '[' -z 71368 ']' 00:14:12.641 09:42:40 ublk_recovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:12.641 09:42:40 ublk_recovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:12.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:12.641 09:42:40 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:12.641 09:42:40 ublk_recovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:12.641 09:42:40 ublk_recovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:12.641 09:42:40 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:12.641 [2024-11-07 09:42:40.203122] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:14:12.641 [2024-11-07 09:42:40.203238] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71368 ] 00:14:12.899 [2024-11-07 09:42:40.357621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:12.899 [2024-11-07 09:42:40.451187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:12.899 [2024-11-07 09:42:40.451261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.462 09:42:40 ublk_recovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:13.462 09:42:40 ublk_recovery -- common/autotest_common.sh@866 -- # return 0 00:14:13.462 09:42:40 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:14:13.462 09:42:40 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.462 09:42:40 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:13.462 [2024-11-07 09:42:40.991649] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:13.462 [2024-11-07 09:42:40.993351] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:13.462 09:42:40 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.462 09:42:40 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:13.462 09:42:40 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.462 09:42:40 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:13.462 malloc0 00:14:13.462 09:42:41 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.462 09:42:41 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:14:13.462 09:42:41 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.462 09:42:41 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:13.462 [2024-11-07 09:42:41.087761] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:14:13.462 [2024-11-07 09:42:41.087849] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:14:13.462 [2024-11-07 09:42:41.087858] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:13.462 [2024-11-07 09:42:41.087866] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:13.462 [2024-11-07 09:42:41.096755] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:13.462 [2024-11-07 09:42:41.096772] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:13.462 [2024-11-07 09:42:41.103654] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:13.462 [2024-11-07 09:42:41.103781] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:13.462 [2024-11-07 09:42:41.120673] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:13.462 1 00:14:13.462 09:42:41 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.462 09:42:41 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:14:14.834 09:42:42 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=71403 00:14:14.834 09:42:42 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:14:14.834 09:42:42 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:14:14.834 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:14.834 fio-3.35 00:14:14.834 Starting 1 process 00:14:20.098 09:42:47 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 71368 00:14:20.098 09:42:47 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:14:25.392 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 71368 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:14:25.392 09:42:52 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=71515 00:14:25.392 09:42:52 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:25.392 09:42:52 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 71515 00:14:25.392 09:42:52 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:25.392 09:42:52 ublk_recovery -- common/autotest_common.sh@833 -- # '[' -z 71515 ']' 00:14:25.392 09:42:52 ublk_recovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:25.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:25.392 09:42:52 ublk_recovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:25.392 09:42:52 ublk_recovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:25.392 09:42:52 ublk_recovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:25.393 09:42:52 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:25.393 [2024-11-07 09:42:52.221226] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
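At this point the original target (pid 71368) has been killed with SIGKILL while fio was mid-run, and a replacement spdk_tgt (pid 71515) is booting. The lines that follow re-create the ublk target and re-attach the kernel device to its bdev with ublk_recover_disk. A condensed sketch of that crash/recover sequence (commands taken from this run; the listen/wait plumbing is simplified):

    # Simulate a target crash under I/O, then recover /dev/ublkb1.
    kill -9 "$spdk_pid"                            # old target dies mid-fio
    "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &      # start a fresh target
    spdk_pid=$!
    rpc_cmd ublk_create_target
    rpc_cmd bdev_malloc_create -b malloc0 64 4096  # same bdev name as before
    rpc_cmd ublk_recover_disk malloc0 1            # re-attach ublk device 1
    wait "$fio_proc"                               # fio survives the restart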
00:14:25.393 [2024-11-07 09:42:52.221348] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71515 ] 00:14:25.393 [2024-11-07 09:42:52.386557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:25.393 [2024-11-07 09:42:52.488053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:25.393 [2024-11-07 09:42:52.488194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:25.655 09:42:53 ublk_recovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:25.655 09:42:53 ublk_recovery -- common/autotest_common.sh@866 -- # return 0 00:14:25.655 09:42:53 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:14:25.655 09:42:53 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.655 09:42:53 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:25.655 [2024-11-07 09:42:53.088659] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:25.655 [2024-11-07 09:42:53.090583] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:25.655 09:42:53 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.655 09:42:53 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:25.655 09:42:53 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.655 09:42:53 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:25.655 malloc0 00:14:25.655 09:42:53 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.655 09:42:53 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:14:25.655 09:42:53 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.655 09:42:53 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:25.655 [2024-11-07 09:42:53.193797] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:14:25.655 [2024-11-07 09:42:53.193837] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:25.655 [2024-11-07 09:42:53.193851] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:25.655 [2024-11-07 09:42:53.202709] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:25.655 [2024-11-07 09:42:53.202736] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:25.655 1 00:14:25.655 09:42:53 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.655 09:42:53 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 71403 00:14:26.599 [2024-11-07 09:42:54.202775] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:26.599 [2024-11-07 09:42:54.208670] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:26.599 [2024-11-07 09:42:54.208690] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:27.544 [2024-11-07 09:42:55.208723] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:27.805 [2024-11-07 09:42:55.218661] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:27.805 [2024-11-07 09:42:55.218681] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:14:28.749 [2024-11-07 09:42:56.218710] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:28.749 [2024-11-07 09:42:56.222664] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:28.749 [2024-11-07 09:42:56.222677] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:28.749 [2024-11-07 09:42:56.222688] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:14:28.749 [2024-11-07 09:42:56.222772] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:14:50.704 [2024-11-07 09:43:17.496661] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:14:50.704 [2024-11-07 09:43:17.500506] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:14:50.704 [2024-11-07 09:43:17.505833] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:14:50.704 [2024-11-07 09:43:17.505851] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:15:17.238 00:15:17.238 fio_test: (groupid=0, jobs=1): err= 0: pid=71406: Thu Nov 7 09:43:42 2024 00:15:17.238 read: IOPS=14.3k, BW=55.8MiB/s (58.5MB/s)(3348MiB/60003msec) 00:15:17.238 slat (nsec): min=1097, max=215162, avg=5133.68, stdev=1571.45 00:15:17.238 clat (usec): min=881, max=30381k, avg=4542.02, stdev=268566.97 00:15:17.238 lat (usec): min=885, max=30381k, avg=4547.15, stdev=268566.97 00:15:17.238 clat percentiles (usec): 00:15:17.238 | 1.00th=[ 1778], 5.00th=[ 1926], 10.00th=[ 1958], 20.00th=[ 1975], 00:15:17.238 | 30.00th=[ 2008], 40.00th=[ 2024], 50.00th=[ 2040], 60.00th=[ 2057], 00:15:17.238 | 70.00th=[ 2089], 80.00th=[ 2114], 90.00th=[ 2212], 95.00th=[ 3032], 00:15:17.238 | 99.00th=[ 5145], 99.50th=[ 5604], 99.90th=[ 6980], 99.95th=[ 7242], 00:15:17.238 | 99.99th=[13173] 00:15:17.238 bw ( KiB/s): min=41440, max=121744, per=100.00%, avg=114407.46, stdev=14241.87, samples=59 00:15:17.238 iops : min=10360, max=30436, avg=28601.86, stdev=3560.47, samples=59 00:15:17.238 write: IOPS=14.3k, BW=55.7MiB/s (58.4MB/s)(3344MiB/60003msec); 0 zone resets 00:15:17.238 slat (nsec): min=1107, max=1105.4k, avg=5184.28, stdev=1963.45 00:15:17.238 clat (usec): min=741, max=30381k, avg=4413.02, stdev=256437.29 00:15:17.238 lat (usec): min=747, max=30381k, avg=4418.21, stdev=256437.28 00:15:17.238 clat percentiles (usec): 00:15:17.238 | 1.00th=[ 1811], 5.00th=[ 2008], 10.00th=[ 2040], 20.00th=[ 2073], 00:15:17.238 | 30.00th=[ 2089], 40.00th=[ 2114], 50.00th=[ 2114], 60.00th=[ 2147], 00:15:17.238 | 70.00th=[ 2180], 80.00th=[ 2212], 90.00th=[ 2311], 95.00th=[ 2999], 00:15:17.238 | 99.00th=[ 5211], 99.50th=[ 5669], 99.90th=[ 7046], 99.95th=[ 7373], 00:15:17.238 | 99.99th=[13435] 00:15:17.238 bw ( KiB/s): min=42016, max=120840, per=100.00%, avg=114250.31, stdev=14048.67, samples=59 00:15:17.238 iops : min=10504, max=30210, avg=28562.58, stdev=3512.17, samples=59 00:15:17.238 lat (usec) : 750=0.01%, 1000=0.01% 00:15:17.238 lat (msec) : 2=16.66%, 4=80.57%, 10=2.75%, 20=0.02%, >=2000=0.01% 00:15:17.238 cpu : usr=3.07%, sys=15.34%, ctx=56889, majf=0, minf=13 00:15:17.238 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:15:17.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:17.238 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:15:17.238 issued rwts: total=857146,855964,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:17.238 latency : target=0, window=0, percentile=100.00%, depth=128
00:15:17.238
00:15:17.238 Run status group 0 (all jobs):
00:15:17.238 READ: bw=55.8MiB/s (58.5MB/s), 55.8MiB/s-55.8MiB/s (58.5MB/s-58.5MB/s), io=3348MiB (3511MB), run=60003-60003msec
00:15:17.238 WRITE: bw=55.7MiB/s (58.4MB/s), 55.7MiB/s-55.7MiB/s (58.4MB/s-58.4MB/s), io=3344MiB (3506MB), run=60003-60003msec
00:15:17.238
00:15:17.238 Disk stats (read/write):
00:15:17.238 ublkb1: ios=853896/852828, merge=0/0, ticks=3838433/3651872, in_queue=7490306, util=99.91%
00:15:17.238 09:43:42 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1
00:15:17.238 09:43:42 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:17.238 09:43:42 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:15:17.238 [2024-11-07 09:43:42.387360] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV
00:15:17.238 [2024-11-07 09:43:42.422670] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed
00:15:17.238 [2024-11-07 09:43:42.422828] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV
00:15:17.238 [2024-11-07 09:43:42.429653] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed
00:15:17.238 [2024-11-07 09:43:42.429764] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq
00:15:17.238 [2024-11-07 09:43:42.429772] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped
00:15:17.238 09:43:42 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:17.238 09:43:42 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target
00:15:17.238 09:43:42 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:17.238 09:43:42 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:15:17.238 [2024-11-07 09:43:42.433740] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
00:15:17.238 [2024-11-07 09:43:42.439267] ublk.c: 766:_ublk_fini_done: *DEBUG*:
00:15:17.238 [2024-11-07 09:43:42.439299] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed
00:15:17.238 09:43:42 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:17.238 09:43:42 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT
00:15:17.238 09:43:42 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup
00:15:17.238 09:43:42 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 71515
00:15:17.238 09:43:42 ublk_recovery -- common/autotest_common.sh@952 -- # '[' -z 71515 ']'
00:15:17.238 09:43:42 ublk_recovery -- common/autotest_common.sh@956 -- # kill -0 71515
00:15:17.238 09:43:42 ublk_recovery -- common/autotest_common.sh@957 -- # uname
00:15:17.238 09:43:42 ublk_recovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:15:17.238 09:43:42 ublk_recovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71515
00:15:17.238 killing process with pid 71515
00:15:17.238 09:43:42 ublk_recovery -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:15:17.238 09:43:42 ublk_recovery -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:15:17.238 09:43:42 ublk_recovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71515'
00:15:17.238 09:43:42 ublk_recovery -- common/autotest_common.sh@971 -- # kill 71515
00:15:17.238 09:43:42 ublk_recovery -- common/autotest_common.sh@976 -- # wait 71515
00:15:17.239 [2024-11-07 09:43:43.576583] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
00:15:17.239 [2024-11-07 09:43:43.576627] ublk.c: 766:_ublk_fini_done: *DEBUG*:
00:15:17.500
00:15:17.500 real 1m4.976s
00:15:17.500 user 1m46.955s
00:15:17.500 sys 0m23.161s
00:15:17.500 09:43:44 ublk_recovery -- common/autotest_common.sh@1128 -- # xtrace_disable
00:15:17.500 ************************************
00:15:17.500 END TEST ublk_recovery
00:15:17.500 ************************************
00:15:17.500 09:43:44 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:15:17.500 09:43:45 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']'
00:15:17.500 09:43:45 -- spdk/autotest.sh@256 -- # timing_exit lib
00:15:17.500 09:43:45 -- common/autotest_common.sh@730 -- # xtrace_disable
00:15:17.500 09:43:45 -- common/autotest_common.sh@10 -- # set +x
00:15:17.500 09:43:45 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']'
00:15:17.500 09:43:45 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']'
00:15:17.500 09:43:45 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']'
00:15:17.500 09:43:45 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']'
00:15:17.500 09:43:45 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:15:17.500 09:43:45 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:15:17.500 09:43:45 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']'
00:15:17.500 09:43:45 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']'
00:15:17.500 09:43:45 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']'
00:15:17.500 09:43:45 -- spdk/autotest.sh@338 -- # '[' 1 -eq 1 ']'
00:15:17.500 09:43:45 -- spdk/autotest.sh@339 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh
00:15:17.500 09:43:45 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:15:17.500 09:43:45 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:15:17.500 09:43:45 -- common/autotest_common.sh@10 -- # set +x
00:15:17.500 ************************************
00:15:17.500 START TEST ftl
00:15:17.500 ************************************
00:15:17.500 09:43:45 ftl -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh
00:15:17.500 * Looking for test storage...
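The teardown just logged is autotest's killprocess helper doing a guarded kill-and-reap. Below is a minimal sketch reconstructed only from the xtrace lines above (@952-@976); the real helper in common/autotest_common.sh has more branches (sudo-wrapped children, custom signals), so treat this as an approximation:

  killprocess() {
      [ -z "$1" ] && return 1                          # @952: require a PID argument
      kill -0 "$1" || return 0                         # @956: probe existence without sending a signal
      if [ "$(uname)" = Linux ]; then                  # @957
          process_name=$(ps --no-headers -o comm= "$1")  # @958: reactor_0 in this run
      fi
      # @962: a sudo wrapper would need its child killed instead (branch not taken here)
      [ "$process_name" = sudo ] || true
      echo "killing process with pid $1"               # @970
      kill "$1"                                        # @971
      wait "$1"                                        # @976: reap so the exit status propagates
  }

The closing wait 71515 is why the ublk.c shutdown messages appear before the test's time summary: cleanup blocks until the target process has fully exited.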
00:15:17.500 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:15:17.500 09:43:45 ftl -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:15:17.500 09:43:45 ftl -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:15:17.500 09:43:45 ftl -- common/autotest_common.sh@1691 -- # lcov --version
00:15:17.763 09:43:45 ftl -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:15:17.763 09:43:45 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:15:17.763 09:43:45 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l
00:15:17.763 09:43:45 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l
00:15:17.763 09:43:45 ftl -- scripts/common.sh@336 -- # IFS=.-:
00:15:17.763 09:43:45 ftl -- scripts/common.sh@336 -- # read -ra ver1
00:15:17.763 09:43:45 ftl -- scripts/common.sh@337 -- # IFS=.-:
00:15:17.763 09:43:45 ftl -- scripts/common.sh@337 -- # read -ra ver2
00:15:17.763 09:43:45 ftl -- scripts/common.sh@338 -- # local 'op=<'
00:15:17.763 09:43:45 ftl -- scripts/common.sh@340 -- # ver1_l=2
00:15:17.763 09:43:45 ftl -- scripts/common.sh@341 -- # ver2_l=1
00:15:17.763 09:43:45 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:15:17.763 09:43:45 ftl -- scripts/common.sh@344 -- # case "$op" in
00:15:17.763 09:43:45 ftl -- scripts/common.sh@345 -- # : 1
00:15:17.763 09:43:45 ftl -- scripts/common.sh@364 -- # (( v = 0 ))
00:15:17.763 09:43:45 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:15:17.763 09:43:45 ftl -- scripts/common.sh@365 -- # decimal 1
00:15:17.763 09:43:45 ftl -- scripts/common.sh@353 -- # local d=1
00:15:17.763 09:43:45 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:15:17.763 09:43:45 ftl -- scripts/common.sh@355 -- # echo 1
00:15:17.763 09:43:45 ftl -- scripts/common.sh@365 -- # ver1[v]=1
00:15:17.763 09:43:45 ftl -- scripts/common.sh@366 -- # decimal 2
00:15:17.763 09:43:45 ftl -- scripts/common.sh@353 -- # local d=2
00:15:17.763 09:43:45 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:15:17.763 09:43:45 ftl -- scripts/common.sh@355 -- # echo 2
00:15:17.763 09:43:45 ftl -- scripts/common.sh@366 -- # ver2[v]=2
00:15:17.763 09:43:45 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:15:17.763 09:43:45 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:15:17.763 09:43:45 ftl -- scripts/common.sh@368 -- # return 0
00:15:17.763 09:43:45 ftl -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:15:17.763 09:43:45 ftl -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:15:17.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:17.763 --rc genhtml_branch_coverage=1
00:15:17.763 --rc genhtml_function_coverage=1
00:15:17.763 --rc genhtml_legend=1
00:15:17.763 --rc geninfo_all_blocks=1
00:15:17.763 --rc geninfo_unexecuted_blocks=1
00:15:17.763
00:15:17.763 '
00:15:17.763 09:43:45 ftl -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:15:17.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:17.763 --rc genhtml_branch_coverage=1
00:15:17.763 --rc genhtml_function_coverage=1
00:15:17.763 --rc genhtml_legend=1
00:15:17.763 --rc geninfo_all_blocks=1
00:15:17.763 --rc geninfo_unexecuted_blocks=1
00:15:17.763
00:15:17.763 '
00:15:17.763 09:43:45 ftl -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:15:17.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:17.763 --rc genhtml_branch_coverage=1
00:15:17.763 --rc genhtml_function_coverage=1
00:15:17.763 --rc genhtml_legend=1
00:15:17.763 --rc geninfo_all_blocks=1
00:15:17.763 --rc geninfo_unexecuted_blocks=1
00:15:17.763
00:15:17.763 '
00:15:17.763 09:43:45 ftl -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:15:17.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:17.763 --rc genhtml_branch_coverage=1
00:15:17.763 --rc genhtml_function_coverage=1
00:15:17.763 --rc genhtml_legend=1
00:15:17.763 --rc geninfo_all_blocks=1
00:15:17.763 --rc geninfo_unexecuted_blocks=1
00:15:17.763
00:15:17.763 '
00:15:17.763 09:43:45 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:15:17.763 09:43:45 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh
00:15:17.763 09:43:45 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:15:17.763 09:43:45 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:15:17.763 09:43:45 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
00:15:17.763 09:43:45 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:15:17.763 09:43:45 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:15:17.763 09:43:45 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:15:17.763 09:43:45 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:15:17.763 09:43:45 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:15:17.763 09:43:45 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:15:17.763 09:43:45 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:15:17.763 09:43:45 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:15:17.763 09:43:45 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:15:17.763 09:43:45 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:15:17.763 09:43:45 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:15:17.763 09:43:45 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:15:17.763 09:43:45 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:15:17.763 09:43:45 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:15:17.763 09:43:45 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:15:17.763 09:43:45 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:15:17.763 09:43:45 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:15:17.763 09:43:45 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:15:17.763 09:43:45 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:15:17.763 09:43:45 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:15:17.763 09:43:45 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:15:17.763 09:43:45 ftl -- ftl/common.sh@23 -- # spdk_ini_pid=
00:15:17.763 09:43:45 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:15:17.763 09:43:45 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:15:17.763 09:43:45 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:15:17.763 09:43:45 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT
00:15:17.763 09:43:45 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED=
00:15:17.763 09:43:45 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED=
00:15:17.763 09:43:45 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE=
00:15:17.763 09:43:45 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:15:18.025 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:15:18.025 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:15:18.025 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:15:18.025 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:15:18.025 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:15:18.284 09:43:45 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=72325
00:15:18.284 09:43:45 ftl -- ftl/ftl.sh@38 -- # waitforlisten 72325
00:15:18.284 09:43:45 ftl -- common/autotest_common.sh@833 -- # '[' -z 72325 ']'
00:15:18.284 09:43:45 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc
00:15:18.284 09:43:45 ftl -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:18.284 09:43:45 ftl -- common/autotest_common.sh@838 -- # local max_retries=100
00:15:18.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:18.284 09:43:45 ftl -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:18.284 09:43:45 ftl -- common/autotest_common.sh@842 -- # xtrace_disable
00:15:18.284 09:43:45 ftl -- common/autotest_common.sh@10 -- # set +x
00:15:18.284 [2024-11-07 09:43:45.784963] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization...
00:15:18.284 [2024-11-07 09:43:45.785108] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72325 ]
00:15:18.284 [2024-11-07 09:43:45.945545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:18.542 [2024-11-07 09:43:46.023784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:19.109 09:43:46 ftl -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:15:19.109 09:43:46 ftl -- common/autotest_common.sh@866 -- # return 0
00:15:19.109 09:43:46 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d
00:15:19.368 09:43:46 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init
00:15:19.935 09:43:47 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62
00:15:19.935 09:43:47 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:15:20.502 09:43:47 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720
00:15:20.503 09:43:47 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs
00:15:20.503 09:43:47 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address'
00:15:20.503 09:43:48 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0
00:15:20.503 09:43:48 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks
00:15:20.503 09:43:48 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0
00:15:20.503 09:43:48 ftl -- ftl/ftl.sh@50 -- # break
00:15:20.503 09:43:48 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']'
00:15:20.503 09:43:48 ftl -- ftl/ftl.sh@59 -- # base_size=1310720
00:15:20.503 09:43:48 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs
00:15:20.503 09:43:48 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address'
00:15:20.761 09:43:48 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0
00:15:20.761 09:43:48 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks
00:15:20.761 09:43:48 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0
00:15:20.761 09:43:48 ftl -- ftl/ftl.sh@63 -- # break
00:15:20.761 09:43:48 ftl -- ftl/ftl.sh@66 -- # killprocess 72325
00:15:20.761 09:43:48 ftl -- common/autotest_common.sh@952 -- # '[' -z 72325 ']'
00:15:20.761 09:43:48 ftl -- common/autotest_common.sh@956 -- # kill -0 72325
00:15:20.761 09:43:48 ftl -- common/autotest_common.sh@957 -- # uname
00:15:20.761 09:43:48 ftl -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:15:20.761 09:43:48 ftl -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72325
00:15:20.761 09:43:48 ftl -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:15:20.761 09:43:48 ftl -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:15:20.761 killing process with pid 72325
00:15:20.762 09:43:48 ftl -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72325'
00:15:20.762 09:43:48 ftl -- common/autotest_common.sh@971 -- # kill 72325
00:15:20.762 09:43:48 ftl -- common/autotest_common.sh@976 -- # wait 72325
00:15:22.146 09:43:49 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']'
00:15:22.146 09:43:49 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic
00:15:22.146 09:43:49 ftl -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']'
00:15:22.146 09:43:49 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable
00:15:22.146 09:43:49 ftl -- common/autotest_common.sh@10 -- # set +x
00:15:22.146 ************************************
00:15:22.146 START TEST ftl_fio_basic
00:15:22.146 ************************************
00:15:22.146 09:43:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic
00:15:22.146 * Looking for test storage...
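Device selection in ftl.sh, which just completed above, is two jq filters over bdev_get_bdevs output. The filters below are copied verbatim from the trace; only the surrounding variable plumbing is paraphrased:

  # cache disk: 64-byte metadata, not zoned, large enough for a 1310720-block buffer
  cache_disks=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
      | jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address')
  # base disk: any other NVMe bdev of sufficient size, excluding the cache BDF
  base_disks=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
      | jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address')

In this run the first filter matched 0000:00:10.0 and the second 0000:00:11.0, which is why fio.sh is invoked as 'fio.sh 0000:00:11.0 0000:00:10.0 basic' (base device first, cache device second).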
00:15:22.146 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:15:22.146 09:43:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:15:22.146 09:43:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # lcov --version
00:15:22.146 09:43:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:15:22.146 09:43:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:15:22.146 09:43:49 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:15:22.146 09:43:49 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l
00:15:22.146 09:43:49 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l
00:15:22.146 09:43:49 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-:
00:15:22.146 09:43:49 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1
00:15:22.146 09:43:49 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-:
00:15:22.146 09:43:49 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2
00:15:22.146 09:43:49 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<'
00:15:22.146 09:43:49 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2
00:15:22.146 09:43:49 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1
00:15:22.146 09:43:49 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:15:22.146 09:43:49 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in
00:15:22.146 09:43:49 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1
00:15:22.146 09:43:49 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 ))
00:15:22.146 09:43:49 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:15:22.146 09:43:49 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1
00:15:22.146 09:43:49 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1
00:15:22.146 09:43:49 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:15:22.146 09:43:49 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1
00:15:22.146 09:43:49 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1
00:15:22.146 09:43:49 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2
00:15:22.146 09:43:49 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2
00:15:22.146 09:43:49 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:15:22.146 09:43:49 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2
00:15:22.146 09:43:49 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2
00:15:22.146 09:43:49 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:15:22.146 09:43:49 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:15:22.146 09:43:49 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0
00:15:22.146 09:43:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:15:22.146 09:43:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:15:22.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:22.146 --rc genhtml_branch_coverage=1
00:15:22.146 --rc genhtml_function_coverage=1
00:15:22.146 --rc genhtml_legend=1
00:15:22.146 --rc geninfo_all_blocks=1
00:15:22.146 --rc geninfo_unexecuted_blocks=1
00:15:22.146
00:15:22.146 '
00:15:22.146 09:43:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:15:22.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:22.146 --rc genhtml_branch_coverage=1
00:15:22.146 --rc genhtml_function_coverage=1
00:15:22.146 --rc genhtml_legend=1
00:15:22.146 --rc geninfo_all_blocks=1
00:15:22.146 --rc geninfo_unexecuted_blocks=1
00:15:22.146
00:15:22.146 '
00:15:22.147 09:43:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:15:22.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:22.147 --rc genhtml_branch_coverage=1
00:15:22.147 --rc genhtml_function_coverage=1
00:15:22.147 --rc genhtml_legend=1
00:15:22.147 --rc geninfo_all_blocks=1
00:15:22.147 --rc geninfo_unexecuted_blocks=1
00:15:22.147
00:15:22.147 '
00:15:22.147 09:43:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:15:22.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:22.147 --rc genhtml_branch_coverage=1
00:15:22.147 --rc genhtml_function_coverage=1
00:15:22.147 --rc genhtml_legend=1
00:15:22.147 --rc geninfo_all_blocks=1
00:15:22.147 --rc geninfo_unexecuted_blocks=1
00:15:22.147
00:15:22.147 '
00:15:22.147 09:43:49 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:15:22.147 09:43:49 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh
00:15:22.147 09:43:49 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:15:22.147 09:43:49 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:15:22.147 09:43:49 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
00:15:22.147 09:43:49 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:15:22.147 09:43:49 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:15:22.147 09:43:49 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:15:22.147 09:43:49 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:15:22.147 09:43:49 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:15:22.147 09:43:49 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:15:22.147 09:43:49 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:15:22.147 09:43:49 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:15:22.147 09:43:49 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:15:22.147 09:43:49 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:15:22.147 09:43:49 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:15:22.147 09:43:49 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:15:22.147 09:43:49 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:15:22.147 09:43:49 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:15:22.147 09:43:49 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:15:22.147 09:43:49 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:15:22.147 09:43:49 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:15:22.147 09:43:49 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:15:22.147 09:43:49 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:15:22.147 09:43:49 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:15:22.147 09:43:49 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:15:22.147 09:43:49 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid=
00:15:22.147 09:43:49 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:15:22.147 09:43:49 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:15:22.147 09:43:49 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite
00:15:22.147 09:43:49 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128'
00:15:22.147 09:43:49 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap'
00:15:22.147 09:43:49 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght'
00:15:22.147 09:43:49 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:15:22.147 09:43:49 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0
00:15:22.147 09:43:49 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0
00:15:22.147 09:43:49 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128'
00:15:22.147 09:43:49 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid=
00:15:22.147 09:43:49 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240
00:15:22.147 09:43:49 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]]
00:15:22.147 09:43:49 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']'
00:15:22.147 09:43:49 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0
00:15:22.147 09:43:49 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0
00:15:22.147 09:43:49 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:15:22.147 09:43:49 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:15:22.147 09:43:49 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT
00:15:22.147 09:43:49 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=72457
00:15:22.147 09:43:49 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 72457
00:15:22.147 09:43:49 ftl.ftl_fio_basic -- common/autotest_common.sh@833 -- # '[' -z 72457 ']'
00:15:22.147 09:43:49 ftl.ftl_fio_basic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:22.147 09:43:49 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # local max_retries=100
00:15:22.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:22.147 09:43:49 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
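The three suite[...] declarations above are how fio.sh maps a suite name to its fio workloads: $3 on the command line ('basic' in this invocation) indexes the associative array, and the trace's tests='randw-verify randw-verify-j2 randw-verify-depth128' is simply that lookup. A condensed sketch; the job-file path in the comment is an assumption, not shown in the log:

  declare -A suite
  suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128'
  suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap'
  suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght'
  tests=${suite[$3]}            # $3 = 'basic' here
  for t in $tests; do
      echo "would run fio job for $t"   # e.g. a config/fio/$t.fio job file (assumed layout)
  done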
00:15:22.147 09:43:49 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7
00:15:22.147 09:43:49 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # xtrace_disable
00:15:22.147 09:43:49 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:15:22.147 [2024-11-07 09:43:49.750200] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization...
00:15:22.147 [2024-11-07 09:43:49.750339] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72457 ]
00:15:22.406 [2024-11-07 09:43:49.910073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:15:22.406 [2024-11-07 09:43:49.999890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:15:22.406 [2024-11-07 09:43:50.000140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:15:22.406 [2024-11-07 09:43:50.000218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:22.972 09:43:50 ftl.ftl_fio_basic -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:15:22.972 09:43:50 ftl.ftl_fio_basic -- common/autotest_common.sh@866 -- # return 0
00:15:22.972 09:43:50 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424
00:15:22.972 09:43:50 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0
00:15:22.972 09:43:50 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0
00:15:22.972 09:43:50 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424
00:15:22.973 09:43:50 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev
00:15:22.973 09:43:50 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:15:23.231 09:43:50 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1
00:15:23.231 09:43:50 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size
00:15:23.231 09:43:50 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1
00:15:23.231 09:43:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1
00:15:23.231 09:43:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info
00:15:23.231 09:43:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs
00:15:23.231 09:43:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb
00:15:23.231 09:43:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1
00:15:23.490 09:43:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[
00:15:23.490 {
00:15:23.490 "name": "nvme0n1",
00:15:23.490 "aliases": [
00:15:23.490 "49beb61e-c985-4ecb-a37e-52ad23e8b93e"
00:15:23.490 ],
00:15:23.490 "product_name": "NVMe disk",
00:15:23.490 "block_size": 4096,
00:15:23.490 "num_blocks": 1310720,
00:15:23.490 "uuid": "49beb61e-c985-4ecb-a37e-52ad23e8b93e",
00:15:23.490 "numa_id": -1,
00:15:23.490 "assigned_rate_limits": {
00:15:23.490 "rw_ios_per_sec": 0,
00:15:23.490 "rw_mbytes_per_sec": 0,
00:15:23.490 "r_mbytes_per_sec": 0,
00:15:23.490 "w_mbytes_per_sec": 0
00:15:23.490 },
00:15:23.490 "claimed": false,
00:15:23.490 "zoned": false,
00:15:23.490 "supported_io_types": {
00:15:23.490 "read": true,
00:15:23.490 "write": true,
00:15:23.490 "unmap": true,
00:15:23.490 "flush": true,
00:15:23.490 "reset": true, 00:15:23.490 "nvme_admin": true, 00:15:23.490 "nvme_io": true, 00:15:23.490 "nvme_io_md": false, 00:15:23.490 "write_zeroes": true, 00:15:23.490 "zcopy": false, 00:15:23.490 "get_zone_info": false, 00:15:23.490 "zone_management": false, 00:15:23.490 "zone_append": false, 00:15:23.490 "compare": true, 00:15:23.490 "compare_and_write": false, 00:15:23.490 "abort": true, 00:15:23.490 "seek_hole": false, 00:15:23.490 "seek_data": false, 00:15:23.490 "copy": true, 00:15:23.490 "nvme_iov_md": false 00:15:23.490 }, 00:15:23.490 "driver_specific": { 00:15:23.490 "nvme": [ 00:15:23.490 { 00:15:23.490 "pci_address": "0000:00:11.0", 00:15:23.490 "trid": { 00:15:23.490 "trtype": "PCIe", 00:15:23.490 "traddr": "0000:00:11.0" 00:15:23.490 }, 00:15:23.490 "ctrlr_data": { 00:15:23.490 "cntlid": 0, 00:15:23.490 "vendor_id": "0x1b36", 00:15:23.490 "model_number": "QEMU NVMe Ctrl", 00:15:23.490 "serial_number": "12341", 00:15:23.490 "firmware_revision": "8.0.0", 00:15:23.490 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:23.490 "oacs": { 00:15:23.490 "security": 0, 00:15:23.490 "format": 1, 00:15:23.490 "firmware": 0, 00:15:23.490 "ns_manage": 1 00:15:23.490 }, 00:15:23.490 "multi_ctrlr": false, 00:15:23.490 "ana_reporting": false 00:15:23.490 }, 00:15:23.490 "vs": { 00:15:23.490 "nvme_version": "1.4" 00:15:23.491 }, 00:15:23.491 "ns_data": { 00:15:23.491 "id": 1, 00:15:23.491 "can_share": false 00:15:23.491 } 00:15:23.491 } 00:15:23.491 ], 00:15:23.491 "mp_policy": "active_passive" 00:15:23.491 } 00:15:23.491 } 00:15:23.491 ]' 00:15:23.491 09:43:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:15:23.491 09:43:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:15:23.491 09:43:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:15:23.491 09:43:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=1310720 00:15:23.491 09:43:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:15:23.491 09:43:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 5120 00:15:23.491 09:43:51 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:15:23.491 09:43:51 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:23.491 09:43:51 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:15:23.491 09:43:51 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:23.491 09:43:51 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:23.749 09:43:51 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:15:23.749 09:43:51 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:24.007 09:43:51 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=dc39a886-43c7-4c25-8152-e7d39af83701 00:15:24.007 09:43:51 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u dc39a886-43c7-4c25-8152-e7d39af83701 00:15:24.266 09:43:51 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=15f22e01-9c3e-4dcf-abd4-074464bdb507 00:15:24.266 09:43:51 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 15f22e01-9c3e-4dcf-abd4-074464bdb507 00:15:24.266 09:43:51 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:15:24.266 09:43:51 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:15:24.266 09:43:51 
ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=15f22e01-9c3e-4dcf-abd4-074464bdb507 00:15:24.266 09:43:51 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:15:24.266 09:43:51 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 15f22e01-9c3e-4dcf-abd4-074464bdb507 00:15:24.266 09:43:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bdev_name=15f22e01-9c3e-4dcf-abd4-074464bdb507 00:15:24.266 09:43:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:15:24.266 09:43:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:15:24.266 09:43:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:15:24.266 09:43:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 15f22e01-9c3e-4dcf-abd4-074464bdb507 00:15:24.266 09:43:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:15:24.266 { 00:15:24.266 "name": "15f22e01-9c3e-4dcf-abd4-074464bdb507", 00:15:24.266 "aliases": [ 00:15:24.266 "lvs/nvme0n1p0" 00:15:24.266 ], 00:15:24.266 "product_name": "Logical Volume", 00:15:24.266 "block_size": 4096, 00:15:24.266 "num_blocks": 26476544, 00:15:24.266 "uuid": "15f22e01-9c3e-4dcf-abd4-074464bdb507", 00:15:24.266 "assigned_rate_limits": { 00:15:24.266 "rw_ios_per_sec": 0, 00:15:24.267 "rw_mbytes_per_sec": 0, 00:15:24.267 "r_mbytes_per_sec": 0, 00:15:24.267 "w_mbytes_per_sec": 0 00:15:24.267 }, 00:15:24.267 "claimed": false, 00:15:24.267 "zoned": false, 00:15:24.267 "supported_io_types": { 00:15:24.267 "read": true, 00:15:24.267 "write": true, 00:15:24.267 "unmap": true, 00:15:24.267 "flush": false, 00:15:24.267 "reset": true, 00:15:24.267 "nvme_admin": false, 00:15:24.267 "nvme_io": false, 00:15:24.267 "nvme_io_md": false, 00:15:24.267 "write_zeroes": true, 00:15:24.267 "zcopy": false, 00:15:24.267 "get_zone_info": false, 00:15:24.267 "zone_management": false, 00:15:24.267 "zone_append": false, 00:15:24.267 "compare": false, 00:15:24.267 "compare_and_write": false, 00:15:24.267 "abort": false, 00:15:24.267 "seek_hole": true, 00:15:24.267 "seek_data": true, 00:15:24.267 "copy": false, 00:15:24.267 "nvme_iov_md": false 00:15:24.267 }, 00:15:24.267 "driver_specific": { 00:15:24.267 "lvol": { 00:15:24.267 "lvol_store_uuid": "dc39a886-43c7-4c25-8152-e7d39af83701", 00:15:24.267 "base_bdev": "nvme0n1", 00:15:24.267 "thin_provision": true, 00:15:24.267 "num_allocated_clusters": 0, 00:15:24.267 "snapshot": false, 00:15:24.267 "clone": false, 00:15:24.267 "esnap_clone": false 00:15:24.267 } 00:15:24.267 } 00:15:24.267 } 00:15:24.267 ]' 00:15:24.267 09:43:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:15:24.525 09:43:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:15:24.525 09:43:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:15:24.525 09:43:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=26476544 00:15:24.525 09:43:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:15:24.525 09:43:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 103424 00:15:24.525 09:43:51 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:15:24.525 09:43:51 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:15:24.525 09:43:51 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 
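get_bdev_size, traced twice above, is just block_size * num_blocks converted to MiB; the bs=/nb=/bdev_size= assignments in the xtrace are its intermediate values. A quick check of the arithmetic for the two devices seen so far:

  # nvme0n1:   4096 B * 1310720 blocks  = 5368709120 B   = 5120 MiB
  # lvol bdev: 4096 B * 26476544 blocks = 108447924224 B = 103424 MiB
  bs=4096 nb=26476544
  echo $(( bs * nb / 1024 / 1024 ))   # prints 103424, matching bdev_size above

The 5171 figure that follows (base_size/cache_size) appears to be a ~5% sizing of that 103424 MiB volume for the NV cache split (103424 * 0.05 ≈ 5171), which matches the bdev_split_create call later in the trace.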
00:15:24.784 09:43:52 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1
00:15:24.784 09:43:52 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]]
00:15:24.784 09:43:52 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 15f22e01-9c3e-4dcf-abd4-074464bdb507
00:15:24.784 09:43:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bdev_name=15f22e01-9c3e-4dcf-abd4-074464bdb507
00:15:24.784 09:43:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info
00:15:24.784 09:43:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs
00:15:24.784 09:43:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb
00:15:24.784 09:43:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 15f22e01-9c3e-4dcf-abd4-074464bdb507
00:15:24.784 09:43:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[
00:15:24.784 {
00:15:24.784 "name": "15f22e01-9c3e-4dcf-abd4-074464bdb507",
00:15:24.784 "aliases": [
00:15:24.784 "lvs/nvme0n1p0"
00:15:24.784 ],
00:15:24.784 "product_name": "Logical Volume",
00:15:24.784 "block_size": 4096,
00:15:24.784 "num_blocks": 26476544,
00:15:24.784 "uuid": "15f22e01-9c3e-4dcf-abd4-074464bdb507",
00:15:24.784 "assigned_rate_limits": {
00:15:24.784 "rw_ios_per_sec": 0,
00:15:24.784 "rw_mbytes_per_sec": 0,
00:15:24.784 "r_mbytes_per_sec": 0,
00:15:24.784 "w_mbytes_per_sec": 0
00:15:24.784 },
00:15:24.784 "claimed": false,
00:15:24.784 "zoned": false,
00:15:24.784 "supported_io_types": {
00:15:24.784 "read": true,
00:15:24.784 "write": true,
00:15:24.784 "unmap": true,
00:15:24.784 "flush": false,
00:15:24.784 "reset": true,
00:15:24.784 "nvme_admin": false,
00:15:24.784 "nvme_io": false,
00:15:24.784 "nvme_io_md": false,
00:15:24.784 "write_zeroes": true,
00:15:24.784 "zcopy": false,
00:15:24.784 "get_zone_info": false,
00:15:24.784 "zone_management": false,
00:15:24.784 "zone_append": false,
00:15:24.784 "compare": false,
00:15:24.784 "compare_and_write": false,
00:15:24.784 "abort": false,
00:15:24.784 "seek_hole": true,
00:15:24.784 "seek_data": true,
00:15:24.784 "copy": false,
00:15:24.784 "nvme_iov_md": false
00:15:24.784 },
00:15:24.784 "driver_specific": {
00:15:24.784 "lvol": {
00:15:24.784 "lvol_store_uuid": "dc39a886-43c7-4c25-8152-e7d39af83701",
00:15:24.784 "base_bdev": "nvme0n1",
00:15:24.784 "thin_provision": true,
00:15:24.784 "num_allocated_clusters": 0,
00:15:24.784 "snapshot": false,
00:15:24.784 "clone": false,
00:15:24.784 "esnap_clone": false
00:15:24.784 }
00:15:24.784 }
00:15:24.784 }
00:15:24.784 ]'
00:15:25.043 09:43:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size'
00:15:25.043 09:43:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096
00:15:25.043 09:43:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks'
00:15:25.043 09:43:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=26476544
00:15:25.043 09:43:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=103424
00:15:25.043 09:43:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 103424
00:15:25.043 09:43:52 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171
00:15:25.043 09:43:52 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
00:15:25.043 09:43:52 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0
00:15:25.043 09:43:52 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60
00:15:25.043 09:43:52 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']'
00:15:25.043 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected
00:15:25.043 09:43:52 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 15f22e01-9c3e-4dcf-abd4-074464bdb507
00:15:25.043 09:43:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bdev_name=15f22e01-9c3e-4dcf-abd4-074464bdb507
00:15:25.043 09:43:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info
00:15:25.043 09:43:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs
00:15:25.043 09:43:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb
00:15:25.043 09:43:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 15f22e01-9c3e-4dcf-abd4-074464bdb507
00:15:25.302 09:43:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[
00:15:25.302 {
00:15:25.302 "name": "15f22e01-9c3e-4dcf-abd4-074464bdb507",
00:15:25.302 "aliases": [
00:15:25.302 "lvs/nvme0n1p0"
00:15:25.302 ],
00:15:25.302 "product_name": "Logical Volume",
00:15:25.302 "block_size": 4096,
00:15:25.302 "num_blocks": 26476544,
00:15:25.302 "uuid": "15f22e01-9c3e-4dcf-abd4-074464bdb507",
00:15:25.302 "assigned_rate_limits": {
00:15:25.302 "rw_ios_per_sec": 0,
00:15:25.302 "rw_mbytes_per_sec": 0,
00:15:25.302 "r_mbytes_per_sec": 0,
00:15:25.302 "w_mbytes_per_sec": 0
00:15:25.302 },
00:15:25.302 "claimed": false,
00:15:25.302 "zoned": false,
00:15:25.302 "supported_io_types": {
00:15:25.302 "read": true,
00:15:25.302 "write": true,
00:15:25.302 "unmap": true,
00:15:25.302 "flush": false,
00:15:25.302 "reset": true,
00:15:25.302 "nvme_admin": false,
00:15:25.302 "nvme_io": false,
00:15:25.302 "nvme_io_md": false,
00:15:25.302 "write_zeroes": true,
00:15:25.302 "zcopy": false,
00:15:25.302 "get_zone_info": false,
00:15:25.302 "zone_management": false,
00:15:25.302 "zone_append": false,
00:15:25.302 "compare": false,
00:15:25.302 "compare_and_write": false,
00:15:25.302 "abort": false,
00:15:25.302 "seek_hole": true,
00:15:25.302 "seek_data": true,
00:15:25.302 "copy": false,
00:15:25.302 "nvme_iov_md": false
00:15:25.302 },
00:15:25.302 "driver_specific": {
00:15:25.302 "lvol": {
00:15:25.302 "lvol_store_uuid": "dc39a886-43c7-4c25-8152-e7d39af83701",
00:15:25.302 "base_bdev": "nvme0n1",
00:15:25.302 "thin_provision": true,
00:15:25.302 "num_allocated_clusters": 0,
00:15:25.302 "snapshot": false,
00:15:25.302 "clone": false,
00:15:25.302 "esnap_clone": false
00:15:25.302 }
00:15:25.302 }
00:15:25.302 }
00:15:25.302 ]'
00:15:25.302 09:43:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size'
00:15:25.302 09:43:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096
00:15:25.302 09:43:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks'
00:15:25.562 09:43:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=26476544
00:15:25.562 09:43:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=103424
00:15:25.562 09:43:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 103424
00:15:25.562 09:43:52 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60
00:15:25.562 09:43:52 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']'
00:15:25.562 09:43:52 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 15f22e01-9c3e-4dcf-abd4-074464bdb507 -c nvc0n1p0 --l2p_dram_limit 60
00:15:25.562 [2024-11-07 09:43:53.160655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:25.562 [2024-11-07 09:43:53.160792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:15:25.562 [2024-11-07 09:43:53.160811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:15:25.562 [2024-11-07 09:43:53.160819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:25.562 [2024-11-07 09:43:53.160875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:25.562 [2024-11-07 09:43:53.160885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:15:25.562 [2024-11-07 09:43:53.160892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms
00:15:25.562 [2024-11-07 09:43:53.160899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:25.562 [2024-11-07 09:43:53.160927] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:15:25.562 [2024-11-07 09:43:53.161484] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:15:25.562 [2024-11-07 09:43:53.161500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:25.562 [2024-11-07 09:43:53.161506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:15:25.562 [2024-11-07 09:43:53.161515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.583 ms
00:15:25.562 [2024-11-07 09:43:53.161521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:25.562 [2024-11-07 09:43:53.161579] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 4729fce7-cd08-4570-a6a5-e0e7d0a6e5d4
00:15:25.562 [2024-11-07 09:43:53.162548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:25.562 [2024-11-07 09:43:53.162580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock
00:15:25.562 [2024-11-07 09:43:53.162588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms
00:15:25.562 [2024-11-07 09:43:53.162595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:25.562 [2024-11-07 09:43:53.167500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:25.562 [2024-11-07 09:43:53.167530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:15:25.562 [2024-11-07 09:43:53.167538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.843 ms
00:15:25.562 [2024-11-07 09:43:53.167546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:25.562 [2024-11-07 09:43:53.167638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:25.562 [2024-11-07 09:43:53.167647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:15:25.562 [2024-11-07 09:43:53.167654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms
00:15:25.562 [2024-11-07 09:43:53.167664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:25.562 [2024-11-07 09:43:53.167706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:25.562 [2024-11-07 09:43:53.167715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:15:25.562 [2024-11-07 09:43:53.167722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:15:25.562 [2024-11-07 09:43:53.167730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:25.562 [2024-11-07 09:43:53.167752] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:15:25.562 [2024-11-07 09:43:53.170697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:25.562 [2024-11-07 09:43:53.170721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:15:25.562 [2024-11-07 09:43:53.170732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.947 ms
00:15:25.562 [2024-11-07 09:43:53.170740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:25.562 [2024-11-07 09:43:53.170771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:25.562 [2024-11-07 09:43:53.170778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:15:25.562 [2024-11-07 09:43:53.170785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms
00:15:25.562 [2024-11-07 09:43:53.170791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:25.562 [2024-11-07 09:43:53.170815] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1
00:15:25.562 [2024-11-07 09:43:53.170934] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:15:25.562 [2024-11-07 09:43:53.170947] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:15:25.562 [2024-11-07 09:43:53.170955] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:15:25.562 [2024-11-07 09:43:53.170964] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:15:25.562 [2024-11-07 09:43:53.170971] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:15:25.562 [2024-11-07 09:43:53.170978] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
00:15:25.562 [2024-11-07 09:43:53.170985] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:15:25.562 [2024-11-07 09:43:53.170992] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:15:25.562 [2024-11-07 09:43:53.170997] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:15:25.562 [2024-11-07 09:43:53.171005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:25.562 [2024-11-07 09:43:53.171012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:15:25.562 [2024-11-07 09:43:53.171021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.191 ms
00:15:25.562 [2024-11-07 09:43:53.171026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:25.562 [2024-11-07 09:43:53.171100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:25.562 [2024-11-07 09:43:53.171107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:15:25.562 [2024-11-07 09:43:53.171114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms
00:15:25.562 [2024-11-07 09:43:53.171120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:25.562 [2024-11-07 09:43:53.171231] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
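One genuine (if benign) bug surfaced earlier in this test and is worth flagging before the region dump that follows: '/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected' came from a test whose left operand expanded to nothing, so '[' saw '-eq 1' with no first argument; the failed test simply evaluated false and the run continued. A hedged reconstruction (the variable name is a stand-in; the real one is not visible in the log):

  # presumably something like this on fio.sh line 52, with $some_flag unset:
  if [ $some_flag -eq 1 ]; then :; fi         # expands to: [ -eq 1 ]  -> "unary operator expected"
  # robust spellings:
  if [ "${some_flag:-0}" -eq 1 ]; then :; fi  # quote and default the value
  if (( some_flag == 1 )); then :; fi         # arithmetic context treats empty/unset as 0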
00:15:25.562 [2024-11-07 09:43:53.171239] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:25.562 [2024-11-07 09:43:53.171249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:25.562 [2024-11-07 09:43:53.171255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:25.562 [2024-11-07 09:43:53.171262] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:25.562 [2024-11-07 09:43:53.171268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:25.562 [2024-11-07 09:43:53.171274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:25.562 [2024-11-07 09:43:53.171279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:25.562 [2024-11-07 09:43:53.171286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:25.562 [2024-11-07 09:43:53.171291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:25.562 [2024-11-07 09:43:53.171298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:25.562 [2024-11-07 09:43:53.171304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:25.562 [2024-11-07 09:43:53.171315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:25.562 [2024-11-07 09:43:53.171320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:25.562 [2024-11-07 09:43:53.171326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:15:25.562 [2024-11-07 09:43:53.171331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:25.562 [2024-11-07 09:43:53.171341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:25.562 [2024-11-07 09:43:53.171346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:15:25.562 [2024-11-07 09:43:53.171353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:25.562 [2024-11-07 09:43:53.171358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:25.562 [2024-11-07 09:43:53.171365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:25.562 [2024-11-07 09:43:53.171370] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:25.562 [2024-11-07 09:43:53.171376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:25.562 [2024-11-07 09:43:53.171381] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:25.562 [2024-11-07 09:43:53.171388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:25.562 [2024-11-07 09:43:53.171393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:25.562 [2024-11-07 09:43:53.171399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:25.562 [2024-11-07 09:43:53.171404] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:25.562 [2024-11-07 09:43:53.171411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:25.562 [2024-11-07 09:43:53.171416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:15:25.562 [2024-11-07 09:43:53.171422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:25.562 [2024-11-07 09:43:53.171428] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:25.562 [2024-11-07 09:43:53.171436] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.12 MiB 00:15:25.563 [2024-11-07 09:43:53.171441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:25.563 [2024-11-07 09:43:53.171447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:25.563 [2024-11-07 09:43:53.171463] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:15:25.563 [2024-11-07 09:43:53.171470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:25.563 [2024-11-07 09:43:53.171475] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:15:25.563 [2024-11-07 09:43:53.171482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:15:25.563 [2024-11-07 09:43:53.171487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:25.563 [2024-11-07 09:43:53.171493] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:15:25.563 [2024-11-07 09:43:53.171498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:15:25.563 [2024-11-07 09:43:53.171506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:25.563 [2024-11-07 09:43:53.171511] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:25.563 [2024-11-07 09:43:53.171519] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:25.563 [2024-11-07 09:43:53.171524] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:25.563 [2024-11-07 09:43:53.171531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:25.563 [2024-11-07 09:43:53.171537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:25.563 [2024-11-07 09:43:53.171546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:25.563 [2024-11-07 09:43:53.171551] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:25.563 [2024-11-07 09:43:53.171558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:25.563 [2024-11-07 09:43:53.171563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:25.563 [2024-11-07 09:43:53.171570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:25.563 [2024-11-07 09:43:53.171579] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:25.563 [2024-11-07 09:43:53.171587] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:25.563 [2024-11-07 09:43:53.171595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:25.563 [2024-11-07 09:43:53.171602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:15:25.563 [2024-11-07 09:43:53.171608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:15:25.563 [2024-11-07 09:43:53.171615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:15:25.563 [2024-11-07 09:43:53.171620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:15:25.563 [2024-11-07 09:43:53.171637] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:15:25.563 [2024-11-07 09:43:53.171643] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:15:25.563 [2024-11-07 09:43:53.171650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:15:25.563 [2024-11-07 09:43:53.171656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:15:25.563 [2024-11-07 09:43:53.171664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:15:25.563 [2024-11-07 09:43:53.171669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:15:25.563 [2024-11-07 09:43:53.171678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:15:25.563 [2024-11-07 09:43:53.171683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:15:25.563 [2024-11-07 09:43:53.171691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:15:25.563 [2024-11-07 09:43:53.171696] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:25.563 [2024-11-07 09:43:53.171704] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:25.563 [2024-11-07 09:43:53.171712] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:25.563 [2024-11-07 09:43:53.171719] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:25.563 [2024-11-07 09:43:53.171725] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:25.563 [2024-11-07 09:43:53.171732] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:25.563 [2024-11-07 09:43:53.171738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:25.563 [2024-11-07 09:43:53.171747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:25.563 [2024-11-07 09:43:53.171753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.580 ms 00:15:25.563 [2024-11-07 09:43:53.171759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:25.563 [2024-11-07 09:43:53.171816] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
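The two layout dumps above express the same geometry in different units: ftl_layout.c prints each region's offset and size in MiB, while the v5 superblock dump prints raw block offsets and sizes in hex. Given the 4096-byte FTL block size this bdev later reports through bdev_get_bdevs, the hex values convert directly into the MiB figures; for example, the type:0x3 entry (blk_offs:0x5020, blk_sz:0x80) appears to correspond to the band_md region at offset 80.12 MiB. A quick shell check of that arithmetic (plain bash/awk, not an SPDK tool; the constants are read off the dumps above):

    # Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 from the superblock
    # dump, against "band_md ... offset: 80.12 MiB ... blocks: 0.50 MiB".
    blk_offs=$((0x5020)) blk_sz=$((0x80)) bs=4096
    awk -v o="$blk_offs" -v s="$blk_sz" -v bs="$bs" 'BEGIN {
        printf "offset: %.2f MiB\n", o * bs / 1048576   # -> 80.12 MiB
        printf "blocks: %.2f MiB\n", s * bs / 1048576   # -> 0.50 MiB
    }'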
00:15:25.563 [2024-11-07 09:43:53.171832] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:15:28.855 [2024-11-07 09:43:56.071216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.855 [2024-11-07 09:43:56.071276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:28.855 [2024-11-07 09:43:56.071294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2899.389 ms 00:15:28.855 [2024-11-07 09:43:56.071304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.855 [2024-11-07 09:43:56.096800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.855 [2024-11-07 09:43:56.096846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:28.855 [2024-11-07 09:43:56.096859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.284 ms 00:15:28.855 [2024-11-07 09:43:56.096868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.855 [2024-11-07 09:43:56.097000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.855 [2024-11-07 09:43:56.097013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:28.855 [2024-11-07 09:43:56.097021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:15:28.855 [2024-11-07 09:43:56.097032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.855 [2024-11-07 09:43:56.141400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.855 [2024-11-07 09:43:56.141465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:28.855 [2024-11-07 09:43:56.141490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.318 ms 00:15:28.855 [2024-11-07 09:43:56.141509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.855 [2024-11-07 09:43:56.141571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.855 [2024-11-07 09:43:56.141589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:28.855 [2024-11-07 09:43:56.141604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:28.855 [2024-11-07 09:43:56.141619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.855 [2024-11-07 09:43:56.142146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.855 [2024-11-07 09:43:56.142186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:28.855 [2024-11-07 09:43:56.142201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.387 ms 00:15:28.855 [2024-11-07 09:43:56.142221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.855 [2024-11-07 09:43:56.142446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.855 [2024-11-07 09:43:56.142468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:28.855 [2024-11-07 09:43:56.142483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.169 ms 00:15:28.855 [2024-11-07 09:43:56.142501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.855 [2024-11-07 09:43:56.159040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.855 [2024-11-07 09:43:56.159071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:28.855 [2024-11-07 
09:43:56.159080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.493 ms 00:15:28.855 [2024-11-07 09:43:56.159090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.855 [2024-11-07 09:43:56.170489] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:28.855 [2024-11-07 09:43:56.184933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.855 [2024-11-07 09:43:56.184976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:28.855 [2024-11-07 09:43:56.184988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.751 ms 00:15:28.855 [2024-11-07 09:43:56.184998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.855 [2024-11-07 09:43:56.239246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.855 [2024-11-07 09:43:56.239281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:28.855 [2024-11-07 09:43:56.239298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.211 ms 00:15:28.855 [2024-11-07 09:43:56.239306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.855 [2024-11-07 09:43:56.239488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.855 [2024-11-07 09:43:56.239499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:28.855 [2024-11-07 09:43:56.239512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:15:28.855 [2024-11-07 09:43:56.239519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.855 [2024-11-07 09:43:56.262228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.855 [2024-11-07 09:43:56.262262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:28.855 [2024-11-07 09:43:56.262275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.652 ms 00:15:28.855 [2024-11-07 09:43:56.262283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.855 [2024-11-07 09:43:56.284448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.855 [2024-11-07 09:43:56.284490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:28.855 [2024-11-07 09:43:56.284504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.117 ms 00:15:28.855 [2024-11-07 09:43:56.284512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.855 [2024-11-07 09:43:56.285084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.855 [2024-11-07 09:43:56.285101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:28.855 [2024-11-07 09:43:56.285112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.531 ms 00:15:28.855 [2024-11-07 09:43:56.285119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.855 [2024-11-07 09:43:56.349504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.855 [2024-11-07 09:43:56.349539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:28.855 [2024-11-07 09:43:56.349555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.342 ms 00:15:28.855 [2024-11-07 09:43:56.349565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.855 [2024-11-07 
09:43:56.373454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.855 [2024-11-07 09:43:56.373488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:28.855 [2024-11-07 09:43:56.373501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.778 ms 00:15:28.855 [2024-11-07 09:43:56.373509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.855 [2024-11-07 09:43:56.396762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.855 [2024-11-07 09:43:56.396793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:15:28.855 [2024-11-07 09:43:56.396806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.203 ms 00:15:28.855 [2024-11-07 09:43:56.396813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.855 [2024-11-07 09:43:56.419754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.855 [2024-11-07 09:43:56.419788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:28.855 [2024-11-07 09:43:56.419801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.900 ms 00:15:28.855 [2024-11-07 09:43:56.419809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.855 [2024-11-07 09:43:56.419853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.855 [2024-11-07 09:43:56.419862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:28.855 [2024-11-07 09:43:56.419874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:28.855 [2024-11-07 09:43:56.419884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.855 [2024-11-07 09:43:56.419970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.855 [2024-11-07 09:43:56.419980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:28.855 [2024-11-07 09:43:56.419991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:15:28.855 [2024-11-07 09:43:56.419998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.855 [2024-11-07 09:43:56.421049] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3259.939 ms, result 0 00:15:28.855 { 00:15:28.855 "name": "ftl0", 00:15:28.855 "uuid": "4729fce7-cd08-4570-a6a5-e0e7d0a6e5d4" 00:15:28.855 } 00:15:28.855 09:43:56 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:15:28.855 09:43:56 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local bdev_name=ftl0 00:15:28.855 09:43:56 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:28.855 09:43:56 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local i 00:15:28.855 09:43:56 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:28.855 09:43:56 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:28.855 09:43:56 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:29.113 09:43:56 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:15:29.371 [ 00:15:29.371 { 00:15:29.371 "name": "ftl0", 00:15:29.371 "aliases": [ 00:15:29.371 "4729fce7-cd08-4570-a6a5-e0e7d0a6e5d4" 00:15:29.371 ], 00:15:29.371 "product_name": "FTL 
disk", 00:15:29.371 "block_size": 4096, 00:15:29.371 "num_blocks": 20971520, 00:15:29.371 "uuid": "4729fce7-cd08-4570-a6a5-e0e7d0a6e5d4", 00:15:29.371 "assigned_rate_limits": { 00:15:29.371 "rw_ios_per_sec": 0, 00:15:29.371 "rw_mbytes_per_sec": 0, 00:15:29.371 "r_mbytes_per_sec": 0, 00:15:29.371 "w_mbytes_per_sec": 0 00:15:29.371 }, 00:15:29.371 "claimed": false, 00:15:29.371 "zoned": false, 00:15:29.371 "supported_io_types": { 00:15:29.371 "read": true, 00:15:29.371 "write": true, 00:15:29.371 "unmap": true, 00:15:29.371 "flush": true, 00:15:29.371 "reset": false, 00:15:29.371 "nvme_admin": false, 00:15:29.371 "nvme_io": false, 00:15:29.371 "nvme_io_md": false, 00:15:29.371 "write_zeroes": true, 00:15:29.371 "zcopy": false, 00:15:29.371 "get_zone_info": false, 00:15:29.371 "zone_management": false, 00:15:29.371 "zone_append": false, 00:15:29.371 "compare": false, 00:15:29.371 "compare_and_write": false, 00:15:29.371 "abort": false, 00:15:29.371 "seek_hole": false, 00:15:29.371 "seek_data": false, 00:15:29.371 "copy": false, 00:15:29.371 "nvme_iov_md": false 00:15:29.371 }, 00:15:29.371 "driver_specific": { 00:15:29.371 "ftl": { 00:15:29.371 "base_bdev": "15f22e01-9c3e-4dcf-abd4-074464bdb507", 00:15:29.371 "cache": "nvc0n1p0" 00:15:29.371 } 00:15:29.371 } 00:15:29.371 } 00:15:29.371 ] 00:15:29.371 09:43:56 ftl.ftl_fio_basic -- common/autotest_common.sh@909 -- # return 0 00:15:29.371 09:43:56 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:15:29.371 09:43:56 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:15:29.629 09:43:57 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:15:29.629 09:43:57 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:15:29.629 [2024-11-07 09:43:57.242130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.629 [2024-11-07 09:43:57.242176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:29.629 [2024-11-07 09:43:57.242189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:29.629 [2024-11-07 09:43:57.242199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.629 [2024-11-07 09:43:57.242237] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:29.629 [2024-11-07 09:43:57.244940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.629 [2024-11-07 09:43:57.244972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:29.629 [2024-11-07 09:43:57.244984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.682 ms 00:15:29.629 [2024-11-07 09:43:57.244993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.629 [2024-11-07 09:43:57.245489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.629 [2024-11-07 09:43:57.245511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:29.629 [2024-11-07 09:43:57.245522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.451 ms 00:15:29.629 [2024-11-07 09:43:57.245530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.629 [2024-11-07 09:43:57.248804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.629 [2024-11-07 09:43:57.248828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:29.629 
[2024-11-07 09:43:57.248840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.238 ms 00:15:29.629 [2024-11-07 09:43:57.248848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.629 [2024-11-07 09:43:57.255076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.629 [2024-11-07 09:43:57.255105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:15:29.629 [2024-11-07 09:43:57.255117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.196 ms 00:15:29.629 [2024-11-07 09:43:57.255125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.629 [2024-11-07 09:43:57.278351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.629 [2024-11-07 09:43:57.278493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:29.629 [2024-11-07 09:43:57.278516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.113 ms 00:15:29.629 [2024-11-07 09:43:57.278524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.629 [2024-11-07 09:43:57.293324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.629 [2024-11-07 09:43:57.293446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:29.629 [2024-11-07 09:43:57.293518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.734 ms 00:15:29.629 [2024-11-07 09:43:57.293593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.629 [2024-11-07 09:43:57.293845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.629 [2024-11-07 09:43:57.293892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:29.629 [2024-11-07 09:43:57.293956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.151 ms 00:15:29.629 [2024-11-07 09:43:57.294035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.888 [2024-11-07 09:43:57.317227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.888 [2024-11-07 09:43:57.317341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:15:29.888 [2024-11-07 09:43:57.317418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.136 ms 00:15:29.888 [2024-11-07 09:43:57.317445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.888 [2024-11-07 09:43:57.341318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.888 [2024-11-07 09:43:57.341434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:15:29.888 [2024-11-07 09:43:57.341510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.817 ms 00:15:29.888 [2024-11-07 09:43:57.341543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.888 [2024-11-07 09:43:57.364507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.888 [2024-11-07 09:43:57.364617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:29.888 [2024-11-07 09:43:57.364698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.908 ms 00:15:29.888 [2024-11-07 09:43:57.364728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.888 [2024-11-07 09:43:57.387635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.888 [2024-11-07 09:43:57.387748] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:29.888 [2024-11-07 09:43:57.387815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.787 ms 00:15:29.888 [2024-11-07 09:43:57.387844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.889 [2024-11-07 09:43:57.387894] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:29.889 [2024-11-07 09:43:57.387922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.387955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.387984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.388059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.388101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.388133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.388162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.388194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.388332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.388364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.388393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.388422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.388509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.388553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.388582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.388613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.388654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.388771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.388803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.388877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.388915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.388946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 
[2024-11-07 09:43:57.389022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.389059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.389087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.389157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.389203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.389242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.389321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.389375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.389406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.389502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.389534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.389625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.389758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.389791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.389879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.389919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.389948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.390030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.390063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.390093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.390212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.390291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.390362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.390429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.390466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:15:29.889 [2024-11-07 09:43:57.390541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.390573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.390660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.390732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.390772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.390840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.390873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.390946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.390982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.391043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.391061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.391070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.391079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.391087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.391097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.391104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.391112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.391120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.391129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.391137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.391146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.391153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.391162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.391169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.391181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.391188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.391209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.391217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.391226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.391237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.391247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.391255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.391263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.391271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.391281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.391288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.391311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.391318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.391327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.391336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.391346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.391354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.391363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.391370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.391380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.391388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.391396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.391404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.391424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.391432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.391442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.391450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.391459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:29.889 [2024-11-07 09:43:57.391482] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:29.889 [2024-11-07 09:43:57.391492] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4729fce7-cd08-4570-a6a5-e0e7d0a6e5d4 00:15:29.889 [2024-11-07 09:43:57.391500] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:29.889 [2024-11-07 09:43:57.391510] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:29.890 [2024-11-07 09:43:57.391517] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:29.890 [2024-11-07 09:43:57.391528] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:29.890 [2024-11-07 09:43:57.391543] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:29.890 [2024-11-07 09:43:57.391553] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:29.890 [2024-11-07 09:43:57.391562] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:29.890 [2024-11-07 09:43:57.391570] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:29.890 [2024-11-07 09:43:57.391577] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:29.890 [2024-11-07 09:43:57.391587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.890 [2024-11-07 09:43:57.391602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:29.890 [2024-11-07 09:43:57.391617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.694 ms 00:15:29.890 [2024-11-07 09:43:57.391625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.890 [2024-11-07 09:43:57.404649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.890 [2024-11-07 09:43:57.404763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:29.890 [2024-11-07 09:43:57.404785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.954 ms 00:15:29.890 [2024-11-07 09:43:57.404793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.890 [2024-11-07 09:43:57.405152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.890 [2024-11-07 09:43:57.405168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:29.890 [2024-11-07 09:43:57.405178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:15:29.890 [2024-11-07 09:43:57.405186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.890 [2024-11-07 09:43:57.450413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:29.890 [2024-11-07 09:43:57.450456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:29.890 [2024-11-07 09:43:57.450468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:29.890 [2024-11-07 09:43:57.450476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
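One line in the statistics dump above that can look alarming is "WAF: inf". It is expected for this run: write amplification is device writes divided by user writes, and the dump records 960 internal writes against 0 user writes, since this app instance is torn down before any user I/O reaches ftl0. The same arithmetic, spelled out with the two counters taken from the dump (plain awk, not an SPDK tool):

    awk -v total=960 -v user=0 'BEGIN {
        if (user == 0) { print "WAF: inf" }                 # no user writes yet
        else           { printf "WAF: %.2f\n", total / user }
    }'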
00:15:29.890 [2024-11-07 09:43:57.450550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:29.890 [2024-11-07 09:43:57.450559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:29.890 [2024-11-07 09:43:57.450569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:29.890 [2024-11-07 09:43:57.450576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.890 [2024-11-07 09:43:57.450690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:29.890 [2024-11-07 09:43:57.450701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:29.890 [2024-11-07 09:43:57.450713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:29.890 [2024-11-07 09:43:57.450721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.890 [2024-11-07 09:43:57.450745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:29.890 [2024-11-07 09:43:57.450754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:29.890 [2024-11-07 09:43:57.450763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:29.890 [2024-11-07 09:43:57.450770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.890 [2024-11-07 09:43:57.534262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:29.890 [2024-11-07 09:43:57.534306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:29.890 [2024-11-07 09:43:57.534318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:29.890 [2024-11-07 09:43:57.534326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.148 [2024-11-07 09:43:57.598825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:30.148 [2024-11-07 09:43:57.598862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:30.148 [2024-11-07 09:43:57.598874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:30.148 [2024-11-07 09:43:57.598882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.148 [2024-11-07 09:43:57.598975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:30.148 [2024-11-07 09:43:57.598985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:30.148 [2024-11-07 09:43:57.598995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:30.148 [2024-11-07 09:43:57.599004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.148 [2024-11-07 09:43:57.599074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:30.148 [2024-11-07 09:43:57.599083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:30.148 [2024-11-07 09:43:57.599092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:30.148 [2024-11-07 09:43:57.599099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.148 [2024-11-07 09:43:57.599226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:30.148 [2024-11-07 09:43:57.599237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:30.148 [2024-11-07 09:43:57.599246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:30.148 [2024-11-07 
09:43:57.599253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.148 [2024-11-07 09:43:57.599304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:30.148 [2024-11-07 09:43:57.599313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:30.148 [2024-11-07 09:43:57.599322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:30.148 [2024-11-07 09:43:57.599330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.148 [2024-11-07 09:43:57.599380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:30.148 [2024-11-07 09:43:57.599388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:30.148 [2024-11-07 09:43:57.599397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:30.148 [2024-11-07 09:43:57.599404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.148 [2024-11-07 09:43:57.599458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:30.148 [2024-11-07 09:43:57.599467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:30.148 [2024-11-07 09:43:57.599476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:30.148 [2024-11-07 09:43:57.599483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.148 [2024-11-07 09:43:57.599667] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 357.494 ms, result 0 00:15:30.148 true 00:15:30.148 09:43:57 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 72457 00:15:30.148 09:43:57 ftl.ftl_fio_basic -- common/autotest_common.sh@952 -- # '[' -z 72457 ']' 00:15:30.148 09:43:57 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # kill -0 72457 00:15:30.148 09:43:57 ftl.ftl_fio_basic -- common/autotest_common.sh@957 -- # uname 00:15:30.148 09:43:57 ftl.ftl_fio_basic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:30.148 09:43:57 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72457 00:15:30.148 killing process with pid 72457 00:15:30.148 09:43:57 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:30.148 09:43:57 ftl.ftl_fio_basic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:30.148 09:43:57 ftl.ftl_fio_basic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72457' 00:15:30.148 09:43:57 ftl.ftl_fio_basic -- common/autotest_common.sh@971 -- # kill 72457 00:15:30.148 09:43:57 ftl.ftl_fio_basic -- common/autotest_common.sh@976 -- # wait 72457 00:15:45.015 09:44:10 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:15:45.015 09:44:10 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:45.015 09:44:10 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:15:45.015 09:44:10 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:45.015 09:44:10 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:45.015 09:44:10 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:45.015 09:44:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:45.015 09:44:10 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:15:45.015 09:44:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:45.015 09:44:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local sanitizers 00:15:45.016 09:44:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:45.016 09:44:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # shift 00:15:45.016 09:44:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # local asan_lib= 00:15:45.016 09:44:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:15:45.016 09:44:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:45.016 09:44:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:15:45.016 09:44:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # grep libasan 00:15:45.016 09:44:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:45.016 09:44:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:45.016 09:44:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # break 00:15:45.016 09:44:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:45.016 09:44:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:45.016 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:15:45.016 fio-3.35 00:15:45.016 Starting 1 thread 00:15:46.921 00:15:46.921 test: (groupid=0, jobs=1): err= 0: pid=72650: Thu Nov 7 09:44:14 2024 00:15:46.921 read: IOPS=1232, BW=81.9MiB/s (85.8MB/s)(255MiB/3109msec) 00:15:46.921 slat (nsec): min=2916, max=35404, avg=4184.50, stdev=1940.21 00:15:46.921 clat (usec): min=186, max=1202, avg=364.53, stdev=99.45 00:15:46.921 lat (usec): min=190, max=1207, avg=368.72, stdev=99.78 00:15:46.921 clat percentiles (usec): 00:15:46.921 | 1.00th=[ 289], 5.00th=[ 293], 10.00th=[ 297], 20.00th=[ 314], 00:15:46.921 | 30.00th=[ 318], 40.00th=[ 322], 50.00th=[ 322], 60.00th=[ 326], 00:15:46.921 | 70.00th=[ 334], 80.00th=[ 424], 90.00th=[ 523], 95.00th=[ 545], 00:15:46.921 | 99.00th=[ 734], 99.50th=[ 799], 99.90th=[ 988], 99.95th=[ 1012], 00:15:46.921 | 99.99th=[ 1205] 00:15:46.921 write: IOPS=1241, BW=82.4MiB/s (86.4MB/s)(256MiB/3106msec); 0 zone resets 00:15:46.921 slat (nsec): min=13662, max=85581, avg=18433.69, stdev=4337.21 00:15:46.921 clat (usec): min=277, max=1269, avg=407.96, stdev=130.55 00:15:46.921 lat (usec): min=316, max=1326, avg=426.39, stdev=131.08 00:15:46.921 clat percentiles (usec): 00:15:46.921 | 1.00th=[ 310], 5.00th=[ 314], 10.00th=[ 318], 20.00th=[ 338], 00:15:46.921 | 30.00th=[ 343], 40.00th=[ 347], 50.00th=[ 347], 60.00th=[ 355], 00:15:46.921 | 70.00th=[ 371], 80.00th=[ 478], 90.00th=[ 627], 95.00th=[ 709], 00:15:46.921 | 99.00th=[ 816], 99.50th=[ 898], 99.90th=[ 1029], 99.95th=[ 1037], 00:15:46.921 | 99.99th=[ 1270] 00:15:46.921 bw ( KiB/s): min=70856, max=94248, per=99.53%, avg=84025.33, stdev=8741.92, samples=6 00:15:46.921 iops : min= 1042, max= 1386, avg=1235.67, stdev=128.56, samples=6 00:15:46.922 lat (usec) : 250=0.05%, 500=84.78%, 750=13.36%, 
1000=1.69% 00:15:46.922 lat (msec) : 2=0.12% 00:15:46.922 cpu : usr=98.97%, sys=0.16%, ctx=10, majf=0, minf=1169 00:15:46.922 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:46.922 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:46.922 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:46.922 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:46.922 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:46.922 00:15:46.922 Run status group 0 (all jobs): 00:15:46.922 READ: bw=81.9MiB/s (85.8MB/s), 81.9MiB/s-81.9MiB/s (85.8MB/s-85.8MB/s), io=255MiB (267MB), run=3109-3109msec 00:15:46.922 WRITE: bw=82.4MiB/s (86.4MB/s), 82.4MiB/s-82.4MiB/s (86.4MB/s-86.4MB/s), io=256MiB (269MB), run=3106-3106msec 00:15:48.307 ----------------------------------------------------- 00:15:48.307 Suppressions used: 00:15:48.307 count bytes template 00:15:48.307 1 5 /usr/src/fio/parse.c 00:15:48.307 1 8 libtcmalloc_minimal.so 00:15:48.307 1 904 libcrypto.so 00:15:48.307 ----------------------------------------------------- 00:15:48.307 00:15:48.307 09:44:15 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:15:48.307 09:44:15 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:48.307 09:44:15 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:48.307 09:44:15 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:48.307 09:44:15 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:15:48.307 09:44:15 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:48.307 09:44:15 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:48.307 09:44:15 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:48.308 09:44:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:48.308 09:44:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:15:48.308 09:44:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:48.308 09:44:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local sanitizers 00:15:48.308 09:44:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:48.308 09:44:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # shift 00:15:48.308 09:44:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # local asan_lib= 00:15:48.308 09:44:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:15:48.308 09:44:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:48.308 09:44:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # grep libasan 00:15:48.308 09:44:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:15:48.308 09:44:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:48.308 09:44:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:48.308 09:44:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # break 00:15:48.308 09:44:15 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:48.308 09:44:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:48.566 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:48.566 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:48.566 fio-3.35 00:15:48.566 Starting 2 threads 00:16:15.118 00:16:15.118 first_half: (groupid=0, jobs=1): err= 0: pid=72742: Thu Nov 7 09:44:41 2024 00:16:15.118 read: IOPS=2739, BW=10.7MiB/s (11.2MB/s)(255MiB/23813msec) 00:16:15.118 slat (nsec): min=2983, max=67191, avg=3818.24, stdev=737.01 00:16:15.118 clat (usec): min=616, max=383528, avg=34668.35, stdev=21077.73 00:16:15.118 lat (usec): min=620, max=383532, avg=34672.17, stdev=21077.74 00:16:15.118 clat percentiles (msec): 00:16:15.118 | 1.00th=[ 8], 5.00th=[ 29], 10.00th=[ 29], 20.00th=[ 29], 00:16:15.118 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 31], 60.00th=[ 31], 00:16:15.118 | 70.00th=[ 33], 80.00th=[ 36], 90.00th=[ 39], 95.00th=[ 47], 00:16:15.118 | 99.00th=[ 148], 99.50th=[ 178], 99.90th=[ 275], 99.95th=[ 334], 00:16:15.118 | 99.99th=[ 376] 00:16:15.118 write: IOPS=3277, BW=12.8MiB/s (13.4MB/s)(256MiB/19993msec); 0 zone resets 00:16:15.118 slat (usec): min=3, max=2954, avg= 5.72, stdev=16.42 00:16:15.118 clat (usec): min=353, max=110349, avg=11964.65, stdev=20067.51 00:16:15.118 lat (usec): min=362, max=110354, avg=11970.37, stdev=20067.56 00:16:15.118 clat percentiles (usec): 00:16:15.118 | 1.00th=[ 734], 5.00th=[ 988], 10.00th=[ 1188], 20.00th=[ 1745], 00:16:15.118 | 30.00th=[ 3326], 40.00th=[ 4817], 50.00th=[ 5800], 60.00th=[ 6390], 00:16:15.118 | 70.00th=[ 7570], 80.00th=[ 11600], 90.00th=[ 21890], 95.00th=[ 68682], 00:16:15.118 | 99.00th=[ 89654], 99.50th=[ 96994], 99.90th=[106431], 99.95th=[108528], 00:16:15.118 | 99.99th=[109577] 00:16:15.118 bw ( KiB/s): min= 856, max=41720, per=79.97%, avg=20971.52, stdev=11213.82, samples=25 00:16:15.118 iops : min= 214, max=10430, avg=5242.88, stdev=2803.46, samples=25 00:16:15.118 lat (usec) : 500=0.03%, 750=0.56%, 1000=2.07% 00:16:15.118 lat (msec) : 2=8.55%, 4=6.40%, 10=21.50%, 20=6.58%, 50=47.62% 00:16:15.118 lat (msec) : 100=5.45%, 250=1.16%, 500=0.07% 00:16:15.118 cpu : usr=99.39%, sys=0.16%, ctx=83, majf=0, minf=5569 00:16:15.118 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:15.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.118 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:15.118 issued rwts: total=65246,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.118 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:15.118 second_half: (groupid=0, jobs=1): err= 0: pid=72743: Thu Nov 7 09:44:41 2024 00:16:15.118 read: IOPS=2727, BW=10.7MiB/s (11.2MB/s)(255MiB/23901msec) 00:16:15.118 slat (nsec): min=2983, max=53189, avg=3712.67, stdev=733.21 00:16:15.118 clat (usec): min=698, max=387687, avg=34640.29, stdev=21939.68 00:16:15.118 lat (usec): min=702, max=387691, avg=34644.00, stdev=21939.71 00:16:15.118 clat percentiles (msec): 00:16:15.118 | 1.00th=[ 7], 5.00th=[ 26], 10.00th=[ 29], 20.00th=[ 29], 00:16:15.118 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 31], 60.00th=[ 31], 00:16:15.118 | 70.00th=[ 33], 80.00th=[ 35], 
90.00th=[ 38], 95.00th=[ 45], 00:16:15.118 | 99.00th=[ 161], 99.50th=[ 180], 99.90th=[ 220], 99.95th=[ 284], 00:16:15.118 | 99.99th=[ 380] 00:16:15.118 write: IOPS=3446, BW=13.5MiB/s (14.1MB/s)(256MiB/19017msec); 0 zone resets 00:16:15.118 slat (usec): min=3, max=1046, avg= 5.49, stdev= 5.58 00:16:15.118 clat (usec): min=350, max=111090, avg=12211.73, stdev=20582.83 00:16:15.118 lat (usec): min=367, max=111095, avg=12217.21, stdev=20582.84 00:16:15.118 clat percentiles (usec): 00:16:15.118 | 1.00th=[ 734], 5.00th=[ 963], 10.00th=[ 1106], 20.00th=[ 1336], 00:16:15.118 | 30.00th=[ 1680], 40.00th=[ 2999], 50.00th=[ 4555], 60.00th=[ 6259], 00:16:15.118 | 70.00th=[ 8848], 80.00th=[ 13173], 90.00th=[ 33817], 95.00th=[ 68682], 00:16:15.118 | 99.00th=[ 90702], 99.50th=[ 98042], 99.90th=[105382], 99.95th=[108528], 00:16:15.118 | 99.99th=[110625] 00:16:15.118 bw ( KiB/s): min= 224, max=40576, per=86.94%, avg=22798.04, stdev=11425.25, samples=23 00:16:15.118 iops : min= 56, max=10144, avg=5699.48, stdev=2856.28, samples=23 00:16:15.118 lat (usec) : 500=0.02%, 750=0.56%, 1000=2.49% 00:16:15.118 lat (msec) : 2=14.23%, 4=6.71%, 10=13.46%, 20=7.45%, 50=48.41% 00:16:15.118 lat (msec) : 100=5.18%, 250=1.46%, 500=0.04% 00:16:15.118 cpu : usr=99.26%, sys=0.10%, ctx=33, majf=0, minf=5546 00:16:15.118 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:15.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.118 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:15.118 issued rwts: total=65183,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.118 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:15.118 00:16:15.118 Run status group 0 (all jobs): 00:16:15.118 READ: bw=21.3MiB/s (22.4MB/s), 10.7MiB/s-10.7MiB/s (11.2MB/s-11.2MB/s), io=509MiB (534MB), run=23813-23901msec 00:16:15.118 WRITE: bw=25.6MiB/s (26.9MB/s), 12.8MiB/s-13.5MiB/s (13.4MB/s-14.1MB/s), io=512MiB (537MB), run=19017-19993msec 00:16:17.027 ----------------------------------------------------- 00:16:17.027 Suppressions used: 00:16:17.027 count bytes template 00:16:17.027 2 10 /usr/src/fio/parse.c 00:16:17.027 3 288 /usr/src/fio/iolog.c 00:16:17.027 1 8 libtcmalloc_minimal.so 00:16:17.027 1 904 libcrypto.so 00:16:17.027 ----------------------------------------------------- 00:16:17.027 00:16:17.027 09:44:44 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:16:17.027 09:44:44 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:17.027 09:44:44 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:17.027 09:44:44 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:17.027 09:44:44 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:16:17.027 09:44:44 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:17.027 09:44:44 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:17.027 09:44:44 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:17.027 09:44:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:17.027 09:44:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:16:17.027 09:44:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:16:17.027 09:44:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local sanitizers 00:16:17.027 09:44:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:17.027 09:44:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # shift 00:16:17.027 09:44:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # local asan_lib= 00:16:17.027 09:44:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:16:17.027 09:44:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:17.027 09:44:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:16:17.027 09:44:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # grep libasan 00:16:17.027 09:44:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:17.027 09:44:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:17.027 09:44:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # break 00:16:17.027 09:44:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:17.028 09:44:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:17.028 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:17.028 fio-3.35 00:16:17.028 Starting 1 thread 00:16:35.156 00:16:35.156 test: (groupid=0, jobs=1): err= 0: pid=73071: Thu Nov 7 09:45:00 2024 00:16:35.156 read: IOPS=7019, BW=27.4MiB/s (28.8MB/s)(255MiB/9289msec) 00:16:35.156 slat (nsec): min=3898, max=40915, avg=4503.09, stdev=945.41 00:16:35.156 clat (usec): min=686, max=39368, avg=18228.13, stdev=2980.34 00:16:35.156 lat (usec): min=692, max=39372, avg=18232.63, stdev=2980.32 00:16:35.156 clat percentiles (usec): 00:16:35.156 | 1.00th=[13435], 5.00th=[14353], 10.00th=[14877], 20.00th=[15270], 00:16:35.156 | 30.00th=[15926], 40.00th=[17171], 50.00th=[18220], 60.00th=[19006], 00:16:35.156 | 70.00th=[19792], 80.00th=[20579], 90.00th=[21890], 95.00th=[23200], 00:16:35.156 | 99.00th=[26346], 99.50th=[27395], 99.90th=[30802], 99.95th=[34866], 00:16:35.156 | 99.99th=[38011] 00:16:35.156 write: IOPS=12.8k, BW=49.9MiB/s (52.3MB/s)(256MiB/5132msec); 0 zone resets 00:16:35.156 slat (usec): min=4, max=1083, avg= 7.18, stdev= 6.12 00:16:35.156 clat (usec): min=414, max=80652, avg=9979.74, stdev=13238.76 00:16:35.156 lat (usec): min=445, max=80658, avg=9986.91, stdev=13238.89 00:16:35.156 clat percentiles (usec): 00:16:35.156 | 1.00th=[ 635], 5.00th=[ 799], 10.00th=[ 906], 20.00th=[ 1237], 00:16:35.156 | 30.00th=[ 1795], 40.00th=[ 2966], 50.00th=[ 4686], 60.00th=[ 5473], 00:16:35.156 | 70.00th=[ 7963], 80.00th=[16319], 90.00th=[28967], 95.00th=[44827], 00:16:35.156 | 99.00th=[52691], 99.50th=[54264], 99.90th=[57410], 99.95th=[65799], 00:16:35.156 | 99.99th=[78119] 00:16:35.156 bw ( KiB/s): min=12944, max=90944, per=93.31%, avg=47662.55, stdev=24253.61, samples=11 00:16:35.156 iops : min= 3236, max=22736, avg=11915.64, stdev=6063.40, samples=11 00:16:35.156 lat (usec) : 500=0.03%, 750=1.61%, 1000=5.33% 00:16:35.156 lat (msec) : 2=9.45%, 4=5.27%, 10=14.38%, 20=42.30%, 50=20.50% 00:16:35.157 lat (msec) : 100=1.14% 
00:16:35.157 cpu : usr=99.12%, sys=0.17%, ctx=33, majf=0, minf=5565 00:16:35.157 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:35.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.157 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:35.157 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.157 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.157 00:16:35.157 Run status group 0 (all jobs): 00:16:35.157 READ: bw=27.4MiB/s (28.8MB/s), 27.4MiB/s-27.4MiB/s (28.8MB/s-28.8MB/s), io=255MiB (267MB), run=9289-9289msec 00:16:35.157 WRITE: bw=49.9MiB/s (52.3MB/s), 49.9MiB/s-49.9MiB/s (52.3MB/s-52.3MB/s), io=256MiB (268MB), run=5132-5132msec 00:16:35.157 ----------------------------------------------------- 00:16:35.157 Suppressions used: 00:16:35.157 count bytes template 00:16:35.157 1 5 /usr/src/fio/parse.c 00:16:35.157 2 192 /usr/src/fio/iolog.c 00:16:35.157 1 8 libtcmalloc_minimal.so 00:16:35.157 1 904 libcrypto.so 00:16:35.157 ----------------------------------------------------- 00:16:35.157 00:16:35.157 09:45:02 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:16:35.157 09:45:02 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:35.157 09:45:02 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:35.157 09:45:02 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:35.157 Remove shared memory files 00:16:35.157 09:45:02 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:16:35.157 09:45:02 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:35.157 09:45:02 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:16:35.157 09:45:02 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:16:35.157 09:45:02 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57176 /dev/shm/spdk_tgt_trace.pid71368 00:16:35.157 09:45:02 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:35.157 09:45:02 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:16:35.157 ************************************ 00:16:35.157 END TEST ftl_fio_basic 00:16:35.157 ************************************ 00:16:35.157 00:16:35.157 real 1m12.563s 00:16:35.157 user 2m43.615s 00:16:35.157 sys 0m2.976s 00:16:35.157 09:45:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:35.157 09:45:02 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:35.157 09:45:02 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:35.157 09:45:02 ftl -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:16:35.157 09:45:02 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:35.157 09:45:02 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:35.157 ************************************ 00:16:35.157 START TEST ftl_bdevperf 00:16:35.157 ************************************ 00:16:35.157 09:45:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:35.157 * Looking for test storage... 
00:16:35.157 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:35.157 09:45:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:35.157 09:45:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:16:35.157 09:45:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:35.157 09:45:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:35.157 09:45:02 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:35.157 09:45:02 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:35.157 09:45:02 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:35.157 09:45:02 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:16:35.157 09:45:02 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:16:35.157 09:45:02 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:16:35.157 09:45:02 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:16:35.157 09:45:02 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:16:35.157 09:45:02 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:16:35.157 09:45:02 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:16:35.157 09:45:02 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:35.157 09:45:02 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:16:35.157 09:45:02 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:16:35.157 09:45:02 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:35.157 09:45:02 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:35.157 09:45:02 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:16:35.157 09:45:02 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:16:35.157 09:45:02 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:35.157 09:45:02 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:16:35.157 09:45:02 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:16:35.157 09:45:02 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:16:35.157 09:45:02 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:16:35.157 09:45:02 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:35.157 09:45:02 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:16:35.157 09:45:02 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:16:35.157 09:45:02 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:35.157 09:45:02 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:35.157 09:45:02 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:16:35.157 09:45:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:35.157 09:45:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:35.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.157 --rc genhtml_branch_coverage=1 00:16:35.157 --rc genhtml_function_coverage=1 00:16:35.157 --rc genhtml_legend=1 00:16:35.157 --rc geninfo_all_blocks=1 00:16:35.157 --rc geninfo_unexecuted_blocks=1 00:16:35.157 00:16:35.157 ' 00:16:35.157 09:45:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:35.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.157 --rc genhtml_branch_coverage=1 00:16:35.157 
--rc genhtml_function_coverage=1 00:16:35.157 --rc genhtml_legend=1 00:16:35.157 --rc geninfo_all_blocks=1 00:16:35.157 --rc geninfo_unexecuted_blocks=1 00:16:35.157 00:16:35.157 ' 00:16:35.157 09:45:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:35.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.157 --rc genhtml_branch_coverage=1 00:16:35.157 --rc genhtml_function_coverage=1 00:16:35.157 --rc genhtml_legend=1 00:16:35.157 --rc geninfo_all_blocks=1 00:16:35.157 --rc geninfo_unexecuted_blocks=1 00:16:35.157 00:16:35.157 ' 00:16:35.157 09:45:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:35.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.158 --rc genhtml_branch_coverage=1 00:16:35.158 --rc genhtml_function_coverage=1 00:16:35.158 --rc genhtml_legend=1 00:16:35.158 --rc geninfo_all_blocks=1 00:16:35.158 --rc geninfo_unexecuted_blocks=1 00:16:35.158 00:16:35.158 ' 00:16:35.158 09:45:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:35.158 09:45:02 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:16:35.158 09:45:02 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:35.158 09:45:02 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:35.158 09:45:02 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:35.158 09:45:02 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:35.158 09:45:02 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:35.158 09:45:02 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:35.158 09:45:02 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:35.158 09:45:02 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:35.158 09:45:02 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:35.158 09:45:02 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:35.158 09:45:02 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:35.158 09:45:02 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:35.158 09:45:02 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:35.158 09:45:02 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:35.158 09:45:02 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:35.158 09:45:02 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:35.158 09:45:02 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:35.158 09:45:02 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:35.158 09:45:02 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:35.158 09:45:02 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:35.158 09:45:02 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:35.158 09:45:02 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:35.158 09:45:02 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:35.158 09:45:02 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:35.158 09:45:02 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:35.158 09:45:02 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:35.158 09:45:02 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:35.158 09:45:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:16:35.158 09:45:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:16:35.158 09:45:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:16:35.158 09:45:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:35.158 09:45:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:16:35.158 09:45:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=73323 00:16:35.158 09:45:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:16:35.158 09:45:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 73323 00:16:35.158 09:45:02 ftl.ftl_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 73323 ']' 00:16:35.158 09:45:02 ftl.ftl_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:35.158 09:45:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:16:35.158 09:45:02 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:35.158 09:45:02 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:35.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:35.158 09:45:02 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:35.158 09:45:02 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:35.158 [2024-11-07 09:45:02.375500] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
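The bdevperf binary above is launched with -z, so it comes up idle and waits for a perform_tests RPC instead of running a workload immediately, and the -T ftl0 argument points it at the FTL bdev the script builds next; waitforlisten then polls the RPC socket until the application answers. A minimal sketch of that launch-and-wait pattern outside the harness, assuming SPDK_REPO points at a built SPDK tree and using rpc_get_methods as a cheap liveness probe (the harness's own waitforlisten helper may probe differently):

    SPDK_REPO=/home/vagrant/spdk_repo/spdk            # assumption: adjust to your checkout
    "$SPDK_REPO/build/examples/bdevperf" -z -T ftl0 &
    bdevperf_pid=$!
    # Poll the default RPC socket until the application starts answering.
    until "$SPDK_REPO/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done
    echo "bdevperf (pid $bdevperf_pid) is up; drive it with bdevperf.py perform_tests"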
00:16:35.158 [2024-11-07 09:45:02.375786] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73323 ] 00:16:35.158 [2024-11-07 09:45:02.538043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:35.158 [2024-11-07 09:45:02.638343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.731 09:45:03 ftl.ftl_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:35.731 09:45:03 ftl.ftl_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:16:35.731 09:45:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:35.731 09:45:03 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:16:35.731 09:45:03 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:35.731 09:45:03 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:16:35.731 09:45:03 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:16:35.731 09:45:03 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:35.992 09:45:03 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:35.992 09:45:03 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:16:35.992 09:45:03 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:35.992 09:45:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:16:35.992 09:45:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:16:35.992 09:45:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bs 00:16:35.992 09:45:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:16:35.992 09:45:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:36.253 09:45:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:16:36.253 { 00:16:36.253 "name": "nvme0n1", 00:16:36.253 "aliases": [ 00:16:36.253 "16a39b57-96f8-47f8-be4f-c994258475e4" 00:16:36.253 ], 00:16:36.253 "product_name": "NVMe disk", 00:16:36.253 "block_size": 4096, 00:16:36.253 "num_blocks": 1310720, 00:16:36.253 "uuid": "16a39b57-96f8-47f8-be4f-c994258475e4", 00:16:36.253 "numa_id": -1, 00:16:36.253 "assigned_rate_limits": { 00:16:36.253 "rw_ios_per_sec": 0, 00:16:36.253 "rw_mbytes_per_sec": 0, 00:16:36.253 "r_mbytes_per_sec": 0, 00:16:36.253 "w_mbytes_per_sec": 0 00:16:36.253 }, 00:16:36.253 "claimed": true, 00:16:36.253 "claim_type": "read_many_write_one", 00:16:36.253 "zoned": false, 00:16:36.253 "supported_io_types": { 00:16:36.253 "read": true, 00:16:36.253 "write": true, 00:16:36.253 "unmap": true, 00:16:36.253 "flush": true, 00:16:36.253 "reset": true, 00:16:36.253 "nvme_admin": true, 00:16:36.253 "nvme_io": true, 00:16:36.253 "nvme_io_md": false, 00:16:36.253 "write_zeroes": true, 00:16:36.253 "zcopy": false, 00:16:36.253 "get_zone_info": false, 00:16:36.253 "zone_management": false, 00:16:36.253 "zone_append": false, 00:16:36.253 "compare": true, 00:16:36.253 "compare_and_write": false, 00:16:36.253 "abort": true, 00:16:36.253 "seek_hole": false, 00:16:36.253 "seek_data": false, 00:16:36.253 "copy": true, 00:16:36.253 "nvme_iov_md": false 00:16:36.253 }, 00:16:36.253 "driver_specific": { 00:16:36.253 
"nvme": [ 00:16:36.253 { 00:16:36.253 "pci_address": "0000:00:11.0", 00:16:36.253 "trid": { 00:16:36.253 "trtype": "PCIe", 00:16:36.253 "traddr": "0000:00:11.0" 00:16:36.253 }, 00:16:36.253 "ctrlr_data": { 00:16:36.253 "cntlid": 0, 00:16:36.253 "vendor_id": "0x1b36", 00:16:36.253 "model_number": "QEMU NVMe Ctrl", 00:16:36.253 "serial_number": "12341", 00:16:36.253 "firmware_revision": "8.0.0", 00:16:36.253 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:36.253 "oacs": { 00:16:36.253 "security": 0, 00:16:36.253 "format": 1, 00:16:36.253 "firmware": 0, 00:16:36.253 "ns_manage": 1 00:16:36.253 }, 00:16:36.253 "multi_ctrlr": false, 00:16:36.253 "ana_reporting": false 00:16:36.253 }, 00:16:36.253 "vs": { 00:16:36.253 "nvme_version": "1.4" 00:16:36.253 }, 00:16:36.253 "ns_data": { 00:16:36.253 "id": 1, 00:16:36.253 "can_share": false 00:16:36.253 } 00:16:36.253 } 00:16:36.253 ], 00:16:36.253 "mp_policy": "active_passive" 00:16:36.253 } 00:16:36.253 } 00:16:36.253 ]' 00:16:36.253 09:45:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:16:36.253 09:45:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:16:36.253 09:45:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:16:36.253 09:45:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=1310720 00:16:36.253 09:45:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:16:36.253 09:45:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 5120 00:16:36.253 09:45:03 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:16:36.253 09:45:03 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:36.253 09:45:03 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:16:36.253 09:45:03 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:36.253 09:45:03 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:36.515 09:45:03 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=dc39a886-43c7-4c25-8152-e7d39af83701 00:16:36.515 09:45:03 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:16:36.515 09:45:03 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u dc39a886-43c7-4c25-8152-e7d39af83701 00:16:36.775 09:45:04 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:36.775 09:45:04 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=0f2867d6-fc73-4765-9b5a-eec1d691c893 00:16:36.775 09:45:04 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 0f2867d6-fc73-4765-9b5a-eec1d691c893 00:16:37.036 09:45:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=316d291d-90f4-45cd-acf5-f767c0e19d06 00:16:37.036 09:45:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 316d291d-90f4-45cd-acf5-f767c0e19d06 00:16:37.036 09:45:04 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:16:37.036 09:45:04 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:37.036 09:45:04 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=316d291d-90f4-45cd-acf5-f767c0e19d06 00:16:37.036 09:45:04 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:16:37.036 09:45:04 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 316d291d-90f4-45cd-acf5-f767c0e19d06 00:16:37.036 09:45:04 
ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bdev_name=316d291d-90f4-45cd-acf5-f767c0e19d06 00:16:37.036 09:45:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:16:37.036 09:45:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bs 00:16:37.036 09:45:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:16:37.036 09:45:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 316d291d-90f4-45cd-acf5-f767c0e19d06 00:16:37.322 09:45:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:16:37.322 { 00:16:37.322 "name": "316d291d-90f4-45cd-acf5-f767c0e19d06", 00:16:37.322 "aliases": [ 00:16:37.322 "lvs/nvme0n1p0" 00:16:37.322 ], 00:16:37.322 "product_name": "Logical Volume", 00:16:37.322 "block_size": 4096, 00:16:37.322 "num_blocks": 26476544, 00:16:37.322 "uuid": "316d291d-90f4-45cd-acf5-f767c0e19d06", 00:16:37.322 "assigned_rate_limits": { 00:16:37.322 "rw_ios_per_sec": 0, 00:16:37.322 "rw_mbytes_per_sec": 0, 00:16:37.322 "r_mbytes_per_sec": 0, 00:16:37.322 "w_mbytes_per_sec": 0 00:16:37.322 }, 00:16:37.322 "claimed": false, 00:16:37.322 "zoned": false, 00:16:37.322 "supported_io_types": { 00:16:37.323 "read": true, 00:16:37.323 "write": true, 00:16:37.323 "unmap": true, 00:16:37.323 "flush": false, 00:16:37.323 "reset": true, 00:16:37.323 "nvme_admin": false, 00:16:37.323 "nvme_io": false, 00:16:37.323 "nvme_io_md": false, 00:16:37.323 "write_zeroes": true, 00:16:37.323 "zcopy": false, 00:16:37.323 "get_zone_info": false, 00:16:37.323 "zone_management": false, 00:16:37.323 "zone_append": false, 00:16:37.323 "compare": false, 00:16:37.323 "compare_and_write": false, 00:16:37.323 "abort": false, 00:16:37.323 "seek_hole": true, 00:16:37.323 "seek_data": true, 00:16:37.323 "copy": false, 00:16:37.323 "nvme_iov_md": false 00:16:37.323 }, 00:16:37.323 "driver_specific": { 00:16:37.323 "lvol": { 00:16:37.323 "lvol_store_uuid": "0f2867d6-fc73-4765-9b5a-eec1d691c893", 00:16:37.323 "base_bdev": "nvme0n1", 00:16:37.323 "thin_provision": true, 00:16:37.323 "num_allocated_clusters": 0, 00:16:37.323 "snapshot": false, 00:16:37.323 "clone": false, 00:16:37.323 "esnap_clone": false 00:16:37.323 } 00:16:37.323 } 00:16:37.323 } 00:16:37.323 ]' 00:16:37.323 09:45:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:16:37.323 09:45:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:16:37.323 09:45:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:16:37.323 09:45:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=26476544 00:16:37.323 09:45:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:16:37.323 09:45:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 103424 00:16:37.323 09:45:04 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:16:37.323 09:45:04 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:16:37.323 09:45:04 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:37.583 09:45:05 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:37.583 09:45:05 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:37.583 09:45:05 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 316d291d-90f4-45cd-acf5-f767c0e19d06 00:16:37.583 09:45:05 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1380 -- # local bdev_name=316d291d-90f4-45cd-acf5-f767c0e19d06 00:16:37.583 09:45:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:16:37.583 09:45:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bs 00:16:37.583 09:45:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:16:37.583 09:45:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 316d291d-90f4-45cd-acf5-f767c0e19d06 00:16:37.843 09:45:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:16:37.843 { 00:16:37.843 "name": "316d291d-90f4-45cd-acf5-f767c0e19d06", 00:16:37.843 "aliases": [ 00:16:37.843 "lvs/nvme0n1p0" 00:16:37.843 ], 00:16:37.843 "product_name": "Logical Volume", 00:16:37.843 "block_size": 4096, 00:16:37.843 "num_blocks": 26476544, 00:16:37.843 "uuid": "316d291d-90f4-45cd-acf5-f767c0e19d06", 00:16:37.843 "assigned_rate_limits": { 00:16:37.843 "rw_ios_per_sec": 0, 00:16:37.843 "rw_mbytes_per_sec": 0, 00:16:37.843 "r_mbytes_per_sec": 0, 00:16:37.843 "w_mbytes_per_sec": 0 00:16:37.843 }, 00:16:37.843 "claimed": false, 00:16:37.843 "zoned": false, 00:16:37.843 "supported_io_types": { 00:16:37.843 "read": true, 00:16:37.843 "write": true, 00:16:37.843 "unmap": true, 00:16:37.843 "flush": false, 00:16:37.843 "reset": true, 00:16:37.843 "nvme_admin": false, 00:16:37.843 "nvme_io": false, 00:16:37.843 "nvme_io_md": false, 00:16:37.843 "write_zeroes": true, 00:16:37.843 "zcopy": false, 00:16:37.843 "get_zone_info": false, 00:16:37.843 "zone_management": false, 00:16:37.843 "zone_append": false, 00:16:37.843 "compare": false, 00:16:37.843 "compare_and_write": false, 00:16:37.843 "abort": false, 00:16:37.843 "seek_hole": true, 00:16:37.843 "seek_data": true, 00:16:37.843 "copy": false, 00:16:37.843 "nvme_iov_md": false 00:16:37.843 }, 00:16:37.843 "driver_specific": { 00:16:37.843 "lvol": { 00:16:37.843 "lvol_store_uuid": "0f2867d6-fc73-4765-9b5a-eec1d691c893", 00:16:37.843 "base_bdev": "nvme0n1", 00:16:37.843 "thin_provision": true, 00:16:37.843 "num_allocated_clusters": 0, 00:16:37.843 "snapshot": false, 00:16:37.843 "clone": false, 00:16:37.843 "esnap_clone": false 00:16:37.843 } 00:16:37.843 } 00:16:37.843 } 00:16:37.843 ]' 00:16:37.843 09:45:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:16:37.843 09:45:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:16:37.843 09:45:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:16:37.843 09:45:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=26476544 00:16:37.843 09:45:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:16:37.843 09:45:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 103424 00:16:37.843 09:45:05 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:16:37.843 09:45:05 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:38.104 09:45:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:16:38.104 09:45:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 316d291d-90f4-45cd-acf5-f767c0e19d06 00:16:38.104 09:45:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bdev_name=316d291d-90f4-45cd-acf5-f767c0e19d06 00:16:38.104 09:45:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:16:38.104 09:45:05 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bs 00:16:38.104 09:45:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:16:38.104 09:45:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 316d291d-90f4-45cd-acf5-f767c0e19d06 00:16:38.364 09:45:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:16:38.364 { 00:16:38.364 "name": "316d291d-90f4-45cd-acf5-f767c0e19d06", 00:16:38.364 "aliases": [ 00:16:38.364 "lvs/nvme0n1p0" 00:16:38.364 ], 00:16:38.364 "product_name": "Logical Volume", 00:16:38.364 "block_size": 4096, 00:16:38.364 "num_blocks": 26476544, 00:16:38.364 "uuid": "316d291d-90f4-45cd-acf5-f767c0e19d06", 00:16:38.364 "assigned_rate_limits": { 00:16:38.364 "rw_ios_per_sec": 0, 00:16:38.364 "rw_mbytes_per_sec": 0, 00:16:38.364 "r_mbytes_per_sec": 0, 00:16:38.364 "w_mbytes_per_sec": 0 00:16:38.364 }, 00:16:38.364 "claimed": false, 00:16:38.364 "zoned": false, 00:16:38.364 "supported_io_types": { 00:16:38.364 "read": true, 00:16:38.364 "write": true, 00:16:38.364 "unmap": true, 00:16:38.364 "flush": false, 00:16:38.364 "reset": true, 00:16:38.364 "nvme_admin": false, 00:16:38.364 "nvme_io": false, 00:16:38.364 "nvme_io_md": false, 00:16:38.364 "write_zeroes": true, 00:16:38.364 "zcopy": false, 00:16:38.364 "get_zone_info": false, 00:16:38.364 "zone_management": false, 00:16:38.364 "zone_append": false, 00:16:38.364 "compare": false, 00:16:38.364 "compare_and_write": false, 00:16:38.364 "abort": false, 00:16:38.364 "seek_hole": true, 00:16:38.364 "seek_data": true, 00:16:38.364 "copy": false, 00:16:38.364 "nvme_iov_md": false 00:16:38.364 }, 00:16:38.364 "driver_specific": { 00:16:38.364 "lvol": { 00:16:38.364 "lvol_store_uuid": "0f2867d6-fc73-4765-9b5a-eec1d691c893", 00:16:38.364 "base_bdev": "nvme0n1", 00:16:38.364 "thin_provision": true, 00:16:38.364 "num_allocated_clusters": 0, 00:16:38.364 "snapshot": false, 00:16:38.364 "clone": false, 00:16:38.364 "esnap_clone": false 00:16:38.364 } 00:16:38.364 } 00:16:38.364 } 00:16:38.364 ]' 00:16:38.364 09:45:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:16:38.364 09:45:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:16:38.364 09:45:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:16:38.364 09:45:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=26476544 00:16:38.364 09:45:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:16:38.364 09:45:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 103424 00:16:38.364 09:45:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:16:38.364 09:45:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 316d291d-90f4-45cd-acf5-f767c0e19d06 -c nvc0n1p0 --l2p_dram_limit 20 00:16:38.625 [2024-11-07 09:45:06.058825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.625 [2024-11-07 09:45:06.058874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:38.625 [2024-11-07 09:45:06.058888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:38.625 [2024-11-07 09:45:06.058900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.625 [2024-11-07 09:45:06.058953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.625 [2024-11-07 09:45:06.058966] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:38.625 [2024-11-07 09:45:06.058974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:16:38.625 [2024-11-07 09:45:06.058984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.625 [2024-11-07 09:45:06.059001] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:38.625 [2024-11-07 09:45:06.059799] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:38.625 [2024-11-07 09:45:06.059815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.625 [2024-11-07 09:45:06.059824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:38.625 [2024-11-07 09:45:06.059833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.819 ms 00:16:38.625 [2024-11-07 09:45:06.059842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.625 [2024-11-07 09:45:06.059871] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 9f863dac-324d-4c69-a17e-d099f809c1e8 00:16:38.625 [2024-11-07 09:45:06.060933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.625 [2024-11-07 09:45:06.060966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:38.625 [2024-11-07 09:45:06.060977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:16:38.625 [2024-11-07 09:45:06.060987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.625 [2024-11-07 09:45:06.066160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.625 [2024-11-07 09:45:06.066280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:38.625 [2024-11-07 09:45:06.066299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.130 ms 00:16:38.625 [2024-11-07 09:45:06.066309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.625 [2024-11-07 09:45:06.066715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.625 [2024-11-07 09:45:06.066741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:38.625 [2024-11-07 09:45:06.066757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:16:38.625 [2024-11-07 09:45:06.066765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.625 [2024-11-07 09:45:06.066827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.625 [2024-11-07 09:45:06.066838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:38.625 [2024-11-07 09:45:06.066848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:38.625 [2024-11-07 09:45:06.066856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.625 [2024-11-07 09:45:06.066879] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:38.625 [2024-11-07 09:45:06.070452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.625 [2024-11-07 09:45:06.070481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:38.625 [2024-11-07 09:45:06.070490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.582 ms 00:16:38.625 [2024-11-07 09:45:06.070502] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.625 [2024-11-07 09:45:06.070529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.625 [2024-11-07 09:45:06.070539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:38.625 [2024-11-07 09:45:06.070546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:38.625 [2024-11-07 09:45:06.070555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.625 [2024-11-07 09:45:06.070568] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:38.625 [2024-11-07 09:45:06.070720] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:38.625 [2024-11-07 09:45:06.070732] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:38.625 [2024-11-07 09:45:06.070745] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:38.625 [2024-11-07 09:45:06.070755] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:38.626 [2024-11-07 09:45:06.070765] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:38.626 [2024-11-07 09:45:06.070773] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:38.626 [2024-11-07 09:45:06.070782] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:38.626 [2024-11-07 09:45:06.070789] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:38.626 [2024-11-07 09:45:06.070798] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:38.626 [2024-11-07 09:45:06.070807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.626 [2024-11-07 09:45:06.070817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:38.626 [2024-11-07 09:45:06.070825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.240 ms 00:16:38.626 [2024-11-07 09:45:06.070833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.626 [2024-11-07 09:45:06.070913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.626 [2024-11-07 09:45:06.070923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:38.626 [2024-11-07 09:45:06.070930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:16:38.626 [2024-11-07 09:45:06.070940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.626 [2024-11-07 09:45:06.071043] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:38.626 [2024-11-07 09:45:06.071059] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:38.626 [2024-11-07 09:45:06.071068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:38.626 [2024-11-07 09:45:06.071077] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:38.626 [2024-11-07 09:45:06.071085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:38.626 [2024-11-07 09:45:06.071093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:38.626 [2024-11-07 09:45:06.071100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:38.626 
[2024-11-07 09:45:06.071108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:38.626 [2024-11-07 09:45:06.071115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:38.626 [2024-11-07 09:45:06.071123] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:38.626 [2024-11-07 09:45:06.071130] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:38.626 [2024-11-07 09:45:06.071138] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:38.626 [2024-11-07 09:45:06.071144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:38.626 [2024-11-07 09:45:06.071158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:38.626 [2024-11-07 09:45:06.071168] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:16:38.626 [2024-11-07 09:45:06.071178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:38.626 [2024-11-07 09:45:06.071185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:38.626 [2024-11-07 09:45:06.071193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:16:38.626 [2024-11-07 09:45:06.071199] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:38.626 [2024-11-07 09:45:06.071207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:38.626 [2024-11-07 09:45:06.071223] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:38.626 [2024-11-07 09:45:06.071231] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:38.626 [2024-11-07 09:45:06.071237] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:38.626 [2024-11-07 09:45:06.071245] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:38.626 [2024-11-07 09:45:06.071251] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:38.626 [2024-11-07 09:45:06.071260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:38.626 [2024-11-07 09:45:06.071266] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:38.626 [2024-11-07 09:45:06.071273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:38.626 [2024-11-07 09:45:06.071280] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:38.626 [2024-11-07 09:45:06.071288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:16:38.626 [2024-11-07 09:45:06.071295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:38.626 [2024-11-07 09:45:06.071304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:38.626 [2024-11-07 09:45:06.071310] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:16:38.626 [2024-11-07 09:45:06.071318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:38.626 [2024-11-07 09:45:06.071325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:38.626 [2024-11-07 09:45:06.071333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:16:38.626 [2024-11-07 09:45:06.071339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:38.626 [2024-11-07 09:45:06.071348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:38.626 [2024-11-07 09:45:06.071355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:16:38.626 [2024-11-07 09:45:06.071364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:38.626 [2024-11-07 09:45:06.071371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:38.626 [2024-11-07 09:45:06.071379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:16:38.626 [2024-11-07 09:45:06.071385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:38.626 [2024-11-07 09:45:06.071394] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:38.626 [2024-11-07 09:45:06.071401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:38.626 [2024-11-07 09:45:06.071410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:38.626 [2024-11-07 09:45:06.071418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:38.626 [2024-11-07 09:45:06.071428] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:38.626 [2024-11-07 09:45:06.071435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:38.626 [2024-11-07 09:45:06.071443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:38.626 [2024-11-07 09:45:06.071449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:38.626 [2024-11-07 09:45:06.071458] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:38.626 [2024-11-07 09:45:06.071464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:38.626 [2024-11-07 09:45:06.071476] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:38.626 [2024-11-07 09:45:06.071485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:38.626 [2024-11-07 09:45:06.071495] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:38.626 [2024-11-07 09:45:06.071502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:16:38.626 [2024-11-07 09:45:06.071510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:16:38.626 [2024-11-07 09:45:06.071517] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:16:38.626 [2024-11-07 09:45:06.071525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:16:38.626 [2024-11-07 09:45:06.071532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:16:38.626 [2024-11-07 09:45:06.071540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:16:38.626 [2024-11-07 09:45:06.071548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:16:38.626 [2024-11-07 09:45:06.071557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:16:38.626 [2024-11-07 09:45:06.071564] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:16:38.626 [2024-11-07 09:45:06.071572] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:16:38.626 [2024-11-07 09:45:06.071579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:16:38.626 [2024-11-07 09:45:06.071589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:16:38.626 [2024-11-07 09:45:06.071596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:16:38.626 [2024-11-07 09:45:06.071604] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:38.626 [2024-11-07 09:45:06.071613] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:38.626 [2024-11-07 09:45:06.071623] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:38.626 [2024-11-07 09:45:06.071641] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:38.626 [2024-11-07 09:45:06.071650] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:38.627 [2024-11-07 09:45:06.071657] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:38.627 [2024-11-07 09:45:06.071666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.627 [2024-11-07 09:45:06.071673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:38.627 [2024-11-07 09:45:06.071682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.688 ms 00:16:38.627 [2024-11-07 09:45:06.071690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.627 [2024-11-07 09:45:06.071723] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
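The layout dump above pins down the arithmetic behind the --l2p_dram_limit 20 argument passed to bdev_ftl_create: the superblock reports 20971520 L2P entries at a 4-byte address size, an 80 MiB on-media L2P table (matching the "Region l2p ... blocks: 80.00 MiB" line), while the DRAM limit caps how much of that table stays resident, which is presumably why the log later notes "l2p maximum resident size is: 19 (of 20) MiB" once cache bookkeeping is subtracted. A quick sanity check of those figures, plain shell arithmetic with no SPDK involved:

    echo $(( 20971520 * 4 / 1024 / 1024 ))     # 80    -> MiB taken by the on-media L2P table
    echo $(( 20971520 * 4096 / 1024 / 1024 ))  # 81920 -> MiB of logical space the entries can map (4096 B blocks)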
00:16:38.627 [2024-11-07 09:45:06.071732] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:41.925 [2024-11-07 09:45:09.140210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.925 [2024-11-07 09:45:09.140427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:41.925 [2024-11-07 09:45:09.140547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3068.467 ms 00:16:41.925 [2024-11-07 09:45:09.140574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.925 [2024-11-07 09:45:09.166227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.925 [2024-11-07 09:45:09.166382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:41.925 [2024-11-07 09:45:09.166442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.364 ms 00:16:41.925 [2024-11-07 09:45:09.166466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.925 [2024-11-07 09:45:09.166603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.925 [2024-11-07 09:45:09.166641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:41.925 [2024-11-07 09:45:09.166667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:16:41.925 [2024-11-07 09:45:09.166686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.925 [2024-11-07 09:45:09.218439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.925 [2024-11-07 09:45:09.218588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:41.925 [2024-11-07 09:45:09.218674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.692 ms 00:16:41.925 [2024-11-07 09:45:09.218701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.925 [2024-11-07 09:45:09.218757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.925 [2024-11-07 09:45:09.218780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:41.925 [2024-11-07 09:45:09.218803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:41.925 [2024-11-07 09:45:09.218878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.925 [2024-11-07 09:45:09.219270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.925 [2024-11-07 09:45:09.219311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:41.925 [2024-11-07 09:45:09.219466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:16:41.925 [2024-11-07 09:45:09.219488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.925 [2024-11-07 09:45:09.219612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.925 [2024-11-07 09:45:09.219652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:41.925 [2024-11-07 09:45:09.219724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:16:41.925 [2024-11-07 09:45:09.219748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.925 [2024-11-07 09:45:09.232753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.925 [2024-11-07 09:45:09.232860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:41.925 [2024-11-07 
09:45:09.232916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.973 ms 00:16:41.925 [2024-11-07 09:45:09.232941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.925 [2024-11-07 09:45:09.244276] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:16:41.925 [2024-11-07 09:45:09.249360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.925 [2024-11-07 09:45:09.249462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:41.925 [2024-11-07 09:45:09.249509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.341 ms 00:16:41.925 [2024-11-07 09:45:09.249533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.925 [2024-11-07 09:45:09.313397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.925 [2024-11-07 09:45:09.313548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:41.925 [2024-11-07 09:45:09.313602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.829 ms 00:16:41.925 [2024-11-07 09:45:09.313638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.925 [2024-11-07 09:45:09.313860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.925 [2024-11-07 09:45:09.313975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:41.925 [2024-11-07 09:45:09.314009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:16:41.925 [2024-11-07 09:45:09.314033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.925 [2024-11-07 09:45:09.337452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.925 [2024-11-07 09:45:09.337570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:41.925 [2024-11-07 09:45:09.337622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.366 ms 00:16:41.925 [2024-11-07 09:45:09.337657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.925 [2024-11-07 09:45:09.360557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.925 [2024-11-07 09:45:09.360684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:41.925 [2024-11-07 09:45:09.360749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.858 ms 00:16:41.925 [2024-11-07 09:45:09.360772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.925 [2024-11-07 09:45:09.361346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.925 [2024-11-07 09:45:09.361438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:41.925 [2024-11-07 09:45:09.361512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.535 ms 00:16:41.925 [2024-11-07 09:45:09.361537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.925 [2024-11-07 09:45:09.435842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.925 [2024-11-07 09:45:09.436004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:41.925 [2024-11-07 09:45:09.436061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.238 ms 00:16:41.925 [2024-11-07 09:45:09.436087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.925 [2024-11-07 
09:45:09.461079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.925 [2024-11-07 09:45:09.461203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:41.925 [2024-11-07 09:45:09.461259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.915 ms 00:16:41.925 [2024-11-07 09:45:09.461283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.925 [2024-11-07 09:45:09.484882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.925 [2024-11-07 09:45:09.484993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:41.925 [2024-11-07 09:45:09.485008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.558 ms 00:16:41.925 [2024-11-07 09:45:09.485017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.925 [2024-11-07 09:45:09.509197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.925 [2024-11-07 09:45:09.509235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:41.925 [2024-11-07 09:45:09.509246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.152 ms 00:16:41.925 [2024-11-07 09:45:09.509256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.925 [2024-11-07 09:45:09.509292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.925 [2024-11-07 09:45:09.509304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:41.925 [2024-11-07 09:45:09.509313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:41.925 [2024-11-07 09:45:09.509322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.925 [2024-11-07 09:45:09.509395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.925 [2024-11-07 09:45:09.509407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:41.925 [2024-11-07 09:45:09.509415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:16:41.925 [2024-11-07 09:45:09.509424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.926 [2024-11-07 09:45:09.510284] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3451.004 ms, result 0 00:16:41.926 { 00:16:41.926 "name": "ftl0", 00:16:41.926 "uuid": "9f863dac-324d-4c69-a17e-d099f809c1e8" 00:16:41.926 } 00:16:41.926 09:45:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:16:41.926 09:45:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:16:41.926 09:45:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:16:42.190 09:45:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:16:42.190 [2024-11-07 09:45:09.850690] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:42.452 I/O size of 69632 is greater than zero copy threshold (65536). 00:16:42.452 Zero copy mechanism will not be used. 00:16:42.452 Running I/O for 4 seconds... 
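(The runs below are driven through bdevperf's RPC helper: bdevperf.sh launches the app against the FTL bdev, then calls perform_tests with queue depth (-q), workload type (-w), run time in seconds (-t), and IO size in bytes (-o). A minimal sketch of the three invocations this test issues, assuming a bdevperf instance is already running and reachable over its default RPC socket; the path matches this workspace layout:

    # Sketch only: replays the three workloads seen in this log against a running bdevperf.
    BDEVPERF_PY=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
    $BDEVPERF_PY perform_tests -q 1   -w randwrite -t 4 -o 69632  # 69632 B > 65536 B zero-copy threshold, so zero copy is skipped
    $BDEVPERF_PY perform_tests -q 128 -w randwrite -t 4 -o 4096
    $BDEVPERF_PY perform_tests -q 128 -w verify    -t 4 -o 4096
)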
00:16:44.337 1245.00 IOPS, 82.68 MiB/s [2024-11-07T09:45:12.950Z] 1157.50 IOPS, 76.87 MiB/s [2024-11-07T09:45:13.893Z] 1299.33 IOPS, 86.28 MiB/s [2024-11-07T09:45:13.893Z] 1291.50 IOPS, 85.76 MiB/s 00:16:46.222 Latency(us) 00:16:46.222 [2024-11-07T09:45:13.893Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:46.222 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:16:46.222 ftl0 : 4.00 1291.13 85.74 0.00 0.00 808.22 184.32 6704.84 00:16:46.222 [2024-11-07T09:45:13.893Z] =================================================================================================================== 00:16:46.222 [2024-11-07T09:45:13.893Z] Total : 1291.13 85.74 0.00 0.00 808.22 184.32 6704.84 00:16:46.222 [2024-11-07 09:45:13.861208] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:46.222 { 00:16:46.222 "results": [ 00:16:46.222 { 00:16:46.222 "job": "ftl0", 00:16:46.222 "core_mask": "0x1", 00:16:46.222 "workload": "randwrite", 00:16:46.222 "status": "finished", 00:16:46.222 "queue_depth": 1, 00:16:46.222 "io_size": 69632, 00:16:46.222 "runtime": 4.001925, 00:16:46.222 "iops": 1291.1286443399115, 00:16:46.222 "mibps": 85.73901153819725, 00:16:46.222 "io_failed": 0, 00:16:46.222 "io_timeout": 0, 00:16:46.222 "avg_latency_us": 808.2210906492385, 00:16:46.223 "min_latency_us": 184.32, 00:16:46.223 "max_latency_us": 6704.836923076923 00:16:46.223 } 00:16:46.223 ], 00:16:46.223 "core_count": 1 00:16:46.223 } 00:16:46.223 09:45:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:16:46.484 [2024-11-07 09:45:13.972313] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:46.484 Running I/O for 4 seconds... 
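(Each perform_tests run ends by emitting a JSON results blob like the one above, a "results" array plus "core_count". The headline figures can be pulled out with jq, which this suite already uses elsewhere; a sketch, where results.json is a hypothetical capture of one such blob:

    # Sketch only: results.json is a hypothetical file holding one results blob from this log.
    jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us"' results.json
)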
00:16:48.367 6358.00 IOPS, 24.84 MiB/s [2024-11-07T09:45:17.423Z] 6151.00 IOPS, 24.03 MiB/s [2024-11-07T09:45:17.995Z] 5874.67 IOPS, 22.95 MiB/s [2024-11-07T09:45:18.258Z] 5729.25 IOPS, 22.38 MiB/s 00:16:50.587 Latency(us) 00:16:50.587 [2024-11-07T09:45:18.258Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:50.587 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:16:50.587 ftl0 : 4.03 5712.48 22.31 0.00 0.00 22319.05 264.66 45976.02 00:16:50.587 [2024-11-07T09:45:18.258Z] =================================================================================================================== 00:16:50.587 [2024-11-07T09:45:18.258Z] Total : 5712.48 22.31 0.00 0.00 22319.05 0.00 45976.02 00:16:50.587 [2024-11-07 09:45:18.013095] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:50.587 { 00:16:50.587 "results": [ 00:16:50.587 { 00:16:50.587 "job": "ftl0", 00:16:50.587 "core_mask": "0x1", 00:16:50.587 "workload": "randwrite", 00:16:50.587 "status": "finished", 00:16:50.587 "queue_depth": 128, 00:16:50.587 "io_size": 4096, 00:16:50.587 "runtime": 4.03135, 00:16:50.587 "iops": 5712.478450146973, 00:16:50.587 "mibps": 22.314368945886613, 00:16:50.587 "io_failed": 0, 00:16:50.587 "io_timeout": 0, 00:16:50.587 "avg_latency_us": 22319.049984467747, 00:16:50.587 "min_latency_us": 264.6646153846154, 00:16:50.587 "max_latency_us": 45976.02461538462 00:16:50.587 } 00:16:50.587 ], 00:16:50.587 "core_count": 1 00:16:50.587 } 00:16:50.587 09:45:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:16:50.587 [2024-11-07 09:45:18.120293] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:50.587 Running I/O for 4 seconds... 
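(As a consistency check on the table above, throughput follows directly from IOPS times IO size: 5712.48 IOPS × 4096 B ≈ 23,398,318 B/s, and 23,398,318 / 1,048,576 ≈ 22.31 MiB/s, matching the reported mibps of 22.314. The same relation holds for the 69632-byte run: 1291.13 × 69632 / 1,048,576 ≈ 85.74 MiB/s.)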
00:16:52.472 4994.00 IOPS, 19.51 MiB/s [2024-11-07T09:45:21.530Z] 4859.50 IOPS, 18.98 MiB/s [2024-11-07T09:45:22.531Z] 4829.00 IOPS, 18.86 MiB/s [2024-11-07T09:45:22.531Z] 4787.00 IOPS, 18.70 MiB/s 00:16:54.860 Latency(us) 00:16:54.860 [2024-11-07T09:45:22.531Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:54.860 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:54.860 Verification LBA range: start 0x0 length 0x1400000 00:16:54.860 ftl0 : 4.01 4799.67 18.75 0.00 0.00 26589.40 346.58 69770.63 00:16:54.860 [2024-11-07T09:45:22.531Z] =================================================================================================================== 00:16:54.860 [2024-11-07T09:45:22.531Z] Total : 4799.67 18.75 0.00 0.00 26589.40 0.00 69770.63 00:16:54.860 [2024-11-07 09:45:22.148346] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 { 00:16:54.860 "results": [ 00:16:54.860 { 00:16:54.860 "job": "ftl0", 00:16:54.860 "core_mask": "0x1", 00:16:54.860 "workload": "verify", 00:16:54.860 "status": "finished", 00:16:54.860 "verify_range": { 00:16:54.860 "start": 0, 00:16:54.860 "length": 20971520 00:16:54.860 }, 00:16:54.860 "queue_depth": 128, 00:16:54.860 "io_size": 4096, 00:16:54.860 "runtime": 4.013404, 00:16:54.860 "iops": 4799.66631816782, 00:16:54.860 "mibps": 18.748696555343045, 00:16:54.860 "io_failed": 0, 00:16:54.860 "io_timeout": 0, 00:16:54.860 "avg_latency_us": 26589.400994652962, 00:16:54.860 "min_latency_us": 346.5846153846154, 00:16:54.860 "max_latency_us": 69770.63384615384 00:16:54.860 } 00:16:54.860 ], 00:16:54.860 "core_count": 1 00:16:54.860 } 00:16:54.860 09:45:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:16:54.860 [2024-11-07 09:45:22.354322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.860 [2024-11-07 09:45:22.354379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:54.860 [2024-11-07 09:45:22.354392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:54.860 [2024-11-07 09:45:22.354401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.860 [2024-11-07 09:45:22.354422] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:54.860 [2024-11-07 09:45:22.357058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.860 [2024-11-07 09:45:22.357088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:54.860 [2024-11-07 09:45:22.357101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.617 ms 00:16:54.860 [2024-11-07 09:45:22.357110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.860 [2024-11-07 09:45:22.359170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.860 [2024-11-07 09:45:22.359201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:54.860 [2024-11-07 09:45:22.359213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.033 ms 00:16:54.860 [2024-11-07 09:45:22.359233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.123 [2024-11-07 09:45:22.555193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.123 [2024-11-07 09:45:22.555267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:16:55.123 [2024-11-07 09:45:22.555289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 195.928 ms 00:16:55.123 [2024-11-07 09:45:22.555298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.123 [2024-11-07 09:45:22.561496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.123 [2024-11-07 09:45:22.561532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:55.123 [2024-11-07 09:45:22.561545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.160 ms 00:16:55.123 [2024-11-07 09:45:22.561553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.123 [2024-11-07 09:45:22.585945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.123 [2024-11-07 09:45:22.585991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:55.123 [2024-11-07 09:45:22.586005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.322 ms 00:16:55.123 [2024-11-07 09:45:22.586014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.123 [2024-11-07 09:45:22.602387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.123 [2024-11-07 09:45:22.602437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:55.123 [2024-11-07 09:45:22.602451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.330 ms 00:16:55.123 [2024-11-07 09:45:22.602460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.123 [2024-11-07 09:45:22.602611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.123 [2024-11-07 09:45:22.602622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:55.123 [2024-11-07 09:45:22.602662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:16:55.123 [2024-11-07 09:45:22.602670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.123 [2024-11-07 09:45:22.625784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.123 [2024-11-07 09:45:22.625825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:55.123 [2024-11-07 09:45:22.625839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.095 ms 00:16:55.123 [2024-11-07 09:45:22.625846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.123 [2024-11-07 09:45:22.648949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.123 [2024-11-07 09:45:22.649105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:55.123 [2024-11-07 09:45:22.649126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.061 ms 00:16:55.123 [2024-11-07 09:45:22.649134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.123 [2024-11-07 09:45:22.672329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.123 [2024-11-07 09:45:22.672371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:55.123 [2024-11-07 09:45:22.672384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.156 ms 00:16:55.123 [2024-11-07 09:45:22.672392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.123 [2024-11-07 09:45:22.695278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.123 [2024-11-07 09:45:22.695415] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:55.123 [2024-11-07 09:45:22.695438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.794 ms 00:16:55.123 [2024-11-07 09:45:22.695446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.123 [2024-11-07 09:45:22.695480] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:55.123 [2024-11-07 09:45:22.695496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:55.123 [2024-11-07 09:45:22.695507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:55.123 [2024-11-07 09:45:22.695516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:55.123 [2024-11-07 09:45:22.695525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:55.123 [2024-11-07 09:45:22.695533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:55.123 [2024-11-07 09:45:22.695542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:55.123 [2024-11-07 09:45:22.695549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:55.123 [2024-11-07 09:45:22.695558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:55.123 [2024-11-07 09:45:22.695565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:55.123 [2024-11-07 09:45:22.695575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:55.123 [2024-11-07 09:45:22.695582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:55.123 [2024-11-07 09:45:22.695591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:55.123 [2024-11-07 09:45:22.695598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:55.123 [2024-11-07 09:45:22.695609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:55.123 [2024-11-07 09:45:22.695616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:55.123 [2024-11-07 09:45:22.695625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:55.123 [2024-11-07 09:45:22.695647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:55.123 [2024-11-07 09:45:22.695659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:55.123 [2024-11-07 09:45:22.695666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:55.123 [2024-11-07 09:45:22.695675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:55.123 [2024-11-07 09:45:22.695683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:55.123 [2024-11-07 09:45:22.695692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:16:55.123 [2024-11-07 09:45:22.695699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:55.123 [2024-11-07 09:45:22.695708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:55.123 [2024-11-07 09:45:22.695716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:55.123 [2024-11-07 09:45:22.695725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:55.123 [2024-11-07 09:45:22.695733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:55.123 [2024-11-07 09:45:22.695743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:55.123 [2024-11-07 09:45:22.695750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:55.123 [2024-11-07 09:45:22.695761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:55.123 [2024-11-07 09:45:22.695770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:55.123 [2024-11-07 09:45:22.695779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:55.123 [2024-11-07 09:45:22.695786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:55.123 [2024-11-07 09:45:22.695796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:55.123 [2024-11-07 09:45:22.695804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:55.123 [2024-11-07 09:45:22.695813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:55.123 [2024-11-07 09:45:22.695820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:55.123 [2024-11-07 09:45:22.695829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:55.123 [2024-11-07 09:45:22.695836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:55.123 [2024-11-07 09:45:22.695845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:55.123 [2024-11-07 09:45:22.695858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:55.123 [2024-11-07 09:45:22.695868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:55.123 [2024-11-07 09:45:22.695875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:55.123 [2024-11-07 09:45:22.695885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:55.123 [2024-11-07 09:45:22.695893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:55.123 [2024-11-07 09:45:22.695903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:55.123 [2024-11-07 09:45:22.695911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:16:55.123 [2024-11-07 09:45:22.695920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:55.123 [2024-11-07 09:45:22.695927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:55.124 [2024-11-07 09:45:22.695936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:55.124 [2024-11-07 09:45:22.695943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:55.124 [2024-11-07 09:45:22.695952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:55.124 [2024-11-07 09:45:22.695960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:55.124 [2024-11-07 09:45:22.695968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:55.124 [2024-11-07 09:45:22.695976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:55.124 [2024-11-07 09:45:22.695984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:55.124 [2024-11-07 09:45:22.695992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:55.124 [2024-11-07 09:45:22.696001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:55.124 [2024-11-07 09:45:22.696009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:55.124 [2024-11-07 09:45:22.696018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:55.124 [2024-11-07 09:45:22.696026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:55.124 [2024-11-07 09:45:22.696036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:55.124 [2024-11-07 09:45:22.696048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:55.124 [2024-11-07 09:45:22.696057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:55.124 [2024-11-07 09:45:22.696064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:55.124 [2024-11-07 09:45:22.696074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:55.124 [2024-11-07 09:45:22.696081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:55.124 [2024-11-07 09:45:22.696090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:55.124 [2024-11-07 09:45:22.696097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:55.124 [2024-11-07 09:45:22.696108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:55.124 [2024-11-07 09:45:22.696115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:55.124 [2024-11-07 09:45:22.696124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:55.124 [2024-11-07 09:45:22.696132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:55.124 [2024-11-07 09:45:22.696141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:55.124 [2024-11-07 09:45:22.696148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:55.124 [2024-11-07 09:45:22.696157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:55.124 [2024-11-07 09:45:22.696164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:55.124 [2024-11-07 09:45:22.696175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:55.124 [2024-11-07 09:45:22.696182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:55.124 [2024-11-07 09:45:22.696191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:55.124 [2024-11-07 09:45:22.696198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:55.124 [2024-11-07 09:45:22.696207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:55.124 [2024-11-07 09:45:22.696214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:55.124 [2024-11-07 09:45:22.696223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:55.124 [2024-11-07 09:45:22.696231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:55.124 [2024-11-07 09:45:22.696241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:55.124 [2024-11-07 09:45:22.696248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:55.124 [2024-11-07 09:45:22.696256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:55.124 [2024-11-07 09:45:22.696264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:55.124 [2024-11-07 09:45:22.696273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:55.124 [2024-11-07 09:45:22.696281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:55.124 [2024-11-07 09:45:22.696289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:55.124 [2024-11-07 09:45:22.696297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:55.124 [2024-11-07 09:45:22.696307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:55.124 [2024-11-07 09:45:22.696317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:55.124 [2024-11-07 09:45:22.696327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:55.124 [2024-11-07 09:45:22.696334] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:55.124 [2024-11-07 09:45:22.696343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:55.124 [2024-11-07 09:45:22.696351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:55.124 [2024-11-07 09:45:22.696359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:55.124 [2024-11-07 09:45:22.696375] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:55.124 [2024-11-07 09:45:22.696385] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9f863dac-324d-4c69-a17e-d099f809c1e8 00:16:55.124 [2024-11-07 09:45:22.696392] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:55.124 [2024-11-07 09:45:22.696403] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:55.124 [2024-11-07 09:45:22.696410] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:55.124 [2024-11-07 09:45:22.696420] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:55.124 [2024-11-07 09:45:22.696427] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:55.124 [2024-11-07 09:45:22.696435] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:55.124 [2024-11-07 09:45:22.696442] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:55.124 [2024-11-07 09:45:22.696452] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:55.124 [2024-11-07 09:45:22.696459] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:55.124 [2024-11-07 09:45:22.696468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.124 [2024-11-07 09:45:22.696475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:55.124 [2024-11-07 09:45:22.696485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.989 ms 00:16:55.124 [2024-11-07 09:45:22.696492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.124 [2024-11-07 09:45:22.708951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.124 [2024-11-07 09:45:22.708985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:55.124 [2024-11-07 09:45:22.708998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.401 ms 00:16:55.124 [2024-11-07 09:45:22.709006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.124 [2024-11-07 09:45:22.709366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.124 [2024-11-07 09:45:22.709375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:55.124 [2024-11-07 09:45:22.709385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:16:55.124 [2024-11-07 09:45:22.709392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.124 [2024-11-07 09:45:22.743955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:55.124 [2024-11-07 09:45:22.743993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:55.124 [2024-11-07 09:45:22.744009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:55.124 [2024-11-07 09:45:22.744018] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:16:55.124 [2024-11-07 09:45:22.744084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:55.124 [2024-11-07 09:45:22.744092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:55.124 [2024-11-07 09:45:22.744102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:55.124 [2024-11-07 09:45:22.744109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.124 [2024-11-07 09:45:22.744187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:55.124 [2024-11-07 09:45:22.744197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:55.124 [2024-11-07 09:45:22.744207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:55.124 [2024-11-07 09:45:22.744214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.124 [2024-11-07 09:45:22.744231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:55.124 [2024-11-07 09:45:22.744239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:55.124 [2024-11-07 09:45:22.744248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:55.124 [2024-11-07 09:45:22.744255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.387 [2024-11-07 09:45:22.820541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:55.387 [2024-11-07 09:45:22.820592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:55.387 [2024-11-07 09:45:22.820607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:55.387 [2024-11-07 09:45:22.820615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.387 [2024-11-07 09:45:22.882883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:55.387 [2024-11-07 09:45:22.882931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:55.387 [2024-11-07 09:45:22.882944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:55.387 [2024-11-07 09:45:22.882952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.387 [2024-11-07 09:45:22.883041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:55.387 [2024-11-07 09:45:22.883055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:55.387 [2024-11-07 09:45:22.883064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:55.387 [2024-11-07 09:45:22.883071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.387 [2024-11-07 09:45:22.883113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:55.387 [2024-11-07 09:45:22.883122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:55.387 [2024-11-07 09:45:22.883132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:55.387 [2024-11-07 09:45:22.883139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.387 [2024-11-07 09:45:22.883249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:55.387 [2024-11-07 09:45:22.883259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:55.387 [2024-11-07 09:45:22.883272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:16:55.387 [2024-11-07 09:45:22.883279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.387 [2024-11-07 09:45:22.883309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:55.387 [2024-11-07 09:45:22.883317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:55.387 [2024-11-07 09:45:22.883327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:55.387 [2024-11-07 09:45:22.883333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.387 [2024-11-07 09:45:22.883368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:55.387 [2024-11-07 09:45:22.883376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:55.387 [2024-11-07 09:45:22.883387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:55.387 [2024-11-07 09:45:22.883394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.387 [2024-11-07 09:45:22.883436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:55.387 [2024-11-07 09:45:22.883452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:55.387 [2024-11-07 09:45:22.883461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:55.387 [2024-11-07 09:45:22.883468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.387 [2024-11-07 09:45:22.883588] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 529.227 ms, result 0 00:16:55.387 true 00:16:55.387 09:45:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 73323 00:16:55.387 09:45:22 ftl.ftl_bdevperf -- common/autotest_common.sh@952 -- # '[' -z 73323 ']' 00:16:55.387 09:45:22 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # kill -0 73323 00:16:55.387 09:45:22 ftl.ftl_bdevperf -- common/autotest_common.sh@957 -- # uname 00:16:55.387 09:45:22 ftl.ftl_bdevperf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:55.387 09:45:22 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73323 00:16:55.387 killing process with pid 73323 00:16:55.387 Received shutdown signal, test time was about 4.000000 seconds 00:16:55.387 00:16:55.387 Latency(us) 00:16:55.387 [2024-11-07T09:45:23.058Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:55.387 [2024-11-07T09:45:23.058Z] =================================================================================================================== 00:16:55.387 [2024-11-07T09:45:23.058Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:55.387 09:45:22 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:55.387 09:45:22 ftl.ftl_bdevperf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:55.387 09:45:22 ftl.ftl_bdevperf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73323' 00:16:55.387 09:45:22 ftl.ftl_bdevperf -- common/autotest_common.sh@971 -- # kill 73323 00:16:55.387 09:45:22 ftl.ftl_bdevperf -- common/autotest_common.sh@976 -- # wait 73323 00:16:57.935 Remove shared memory files 00:16:57.935 09:45:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:16:57.935 09:45:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:16:57.935 09:45:25 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:57.935 09:45:25 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:16:57.935 09:45:25 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:16:57.935 09:45:25 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:16:57.935 09:45:25 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:57.935 09:45:25 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:16:57.935 ************************************ 00:16:57.935 END TEST ftl_bdevperf 00:16:57.935 ************************************ 00:16:57.935 00:16:57.935 real 0m23.041s 00:16:57.935 user 0m25.680s 00:16:57.935 sys 0m0.868s 00:16:57.935 09:45:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:57.935 09:45:25 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:57.935 09:45:25 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:16:57.935 09:45:25 ftl -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:16:57.935 09:45:25 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:57.935 09:45:25 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:57.935 ************************************ 00:16:57.935 START TEST ftl_trim 00:16:57.935 ************************************ 00:16:57.935 09:45:25 ftl.ftl_trim -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:16:57.935 * Looking for test storage... 00:16:57.935 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:57.935 09:45:25 ftl.ftl_trim -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:57.935 09:45:25 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # lcov --version 00:16:57.935 09:45:25 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:57.935 09:45:25 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:57.935 09:45:25 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:57.935 09:45:25 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:57.935 09:45:25 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:57.935 09:45:25 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:16:57.935 09:45:25 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:16:57.935 09:45:25 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:16:57.935 09:45:25 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:16:57.935 09:45:25 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:16:57.935 09:45:25 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:16:57.935 09:45:25 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:16:57.935 09:45:25 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:57.935 09:45:25 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:16:57.935 09:45:25 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:16:57.935 09:45:25 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:57.935 09:45:25 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:57.935 09:45:25 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:16:57.935 09:45:25 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:16:57.935 09:45:25 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:57.935 09:45:25 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:16:57.935 09:45:25 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:16:57.935 09:45:25 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:16:57.935 09:45:25 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:16:57.935 09:45:25 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:57.935 09:45:25 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:16:57.935 09:45:25 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:16:57.935 09:45:25 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:57.935 09:45:25 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:57.935 09:45:25 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:16:57.935 09:45:25 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:57.935 09:45:25 ftl.ftl_trim -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:57.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:57.935 --rc genhtml_branch_coverage=1 00:16:57.935 --rc genhtml_function_coverage=1 00:16:57.935 --rc genhtml_legend=1 00:16:57.935 --rc geninfo_all_blocks=1 00:16:57.935 --rc geninfo_unexecuted_blocks=1 00:16:57.935 00:16:57.935 ' 00:16:57.935 09:45:25 ftl.ftl_trim -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:57.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:57.935 --rc genhtml_branch_coverage=1 00:16:57.935 --rc genhtml_function_coverage=1 00:16:57.935 --rc genhtml_legend=1 00:16:57.935 --rc geninfo_all_blocks=1 00:16:57.935 --rc geninfo_unexecuted_blocks=1 00:16:57.935 00:16:57.935 ' 00:16:57.935 09:45:25 ftl.ftl_trim -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:57.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:57.935 --rc genhtml_branch_coverage=1 00:16:57.935 --rc genhtml_function_coverage=1 00:16:57.935 --rc genhtml_legend=1 00:16:57.935 --rc geninfo_all_blocks=1 00:16:57.935 --rc geninfo_unexecuted_blocks=1 00:16:57.935 00:16:57.935 ' 00:16:57.935 09:45:25 ftl.ftl_trim -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:57.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:57.935 --rc genhtml_branch_coverage=1 00:16:57.935 --rc genhtml_function_coverage=1 00:16:57.935 --rc genhtml_legend=1 00:16:57.935 --rc geninfo_all_blocks=1 00:16:57.935 --rc geninfo_unexecuted_blocks=1 00:16:57.935 00:16:57.935 ' 00:16:57.935 09:45:25 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:57.935 09:45:25 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:16:57.935 09:45:25 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:57.935 09:45:25 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:57.935 09:45:25 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:16:57.935 09:45:25 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:57.935 09:45:25 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:57.935 09:45:25 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:57.935 09:45:25 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:57.935 09:45:25 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:57.935 09:45:25 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:57.935 09:45:25 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:57.935 09:45:25 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:57.935 09:45:25 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:57.935 09:45:25 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:57.935 09:45:25 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:57.935 09:45:25 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:57.935 09:45:25 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:57.935 09:45:25 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:57.935 09:45:25 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:57.935 09:45:25 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:57.935 09:45:25 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:57.935 09:45:25 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:57.935 09:45:25 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:57.935 09:45:25 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:57.935 09:45:25 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:57.935 09:45:25 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:57.935 09:45:25 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:57.935 09:45:25 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:57.935 09:45:25 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:57.935 09:45:25 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:16:57.936 09:45:25 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:16:57.936 09:45:25 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:16:57.936 09:45:25 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:16:57.936 09:45:25 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:16:57.936 09:45:25 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:16:57.936 09:45:25 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:16:57.936 09:45:25 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:16:57.936 09:45:25 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:57.936 09:45:25 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:57.936 09:45:25 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:16:57.936 09:45:25 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=73675 00:16:57.936 09:45:25 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 73675 00:16:57.936 09:45:25 ftl.ftl_trim -- common/autotest_common.sh@833 -- # '[' -z 73675 ']' 00:16:57.936 09:45:25 ftl.ftl_trim -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.936 09:45:25 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:16:57.936 09:45:25 ftl.ftl_trim -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:57.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:57.936 09:45:25 ftl.ftl_trim -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:57.936 09:45:25 ftl.ftl_trim -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:57.936 09:45:25 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:16:57.936 [2024-11-07 09:45:25.500318] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:16:57.936 [2024-11-07 09:45:25.500441] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73675 ] 00:16:58.196 [2024-11-07 09:45:25.662372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:58.196 [2024-11-07 09:45:25.768430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:58.196 [2024-11-07 09:45:25.768714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.197 [2024-11-07 09:45:25.768728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:58.770 09:45:26 ftl.ftl_trim -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:58.770 09:45:26 ftl.ftl_trim -- common/autotest_common.sh@866 -- # return 0 00:16:58.770 09:45:26 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:58.770 09:45:26 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:16:58.770 09:45:26 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:58.770 09:45:26 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:16:58.770 09:45:26 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:16:58.770 09:45:26 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:59.030 09:45:26 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:59.030 09:45:26 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:16:59.030 09:45:26 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:59.030 09:45:26 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:16:59.030 09:45:26 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:16:59.030 09:45:26 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:16:59.030 09:45:26 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:16:59.030 09:45:26 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:59.290 09:45:26 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:16:59.290 { 00:16:59.290 "name": "nvme0n1", 00:16:59.290 "aliases": [ 
00:16:59.290 "b88a8054-0bca-4aa5-a109-503442937460" 00:16:59.290 ], 00:16:59.290 "product_name": "NVMe disk", 00:16:59.290 "block_size": 4096, 00:16:59.290 "num_blocks": 1310720, 00:16:59.290 "uuid": "b88a8054-0bca-4aa5-a109-503442937460", 00:16:59.290 "numa_id": -1, 00:16:59.290 "assigned_rate_limits": { 00:16:59.290 "rw_ios_per_sec": 0, 00:16:59.290 "rw_mbytes_per_sec": 0, 00:16:59.290 "r_mbytes_per_sec": 0, 00:16:59.290 "w_mbytes_per_sec": 0 00:16:59.290 }, 00:16:59.290 "claimed": true, 00:16:59.290 "claim_type": "read_many_write_one", 00:16:59.290 "zoned": false, 00:16:59.290 "supported_io_types": { 00:16:59.290 "read": true, 00:16:59.290 "write": true, 00:16:59.290 "unmap": true, 00:16:59.290 "flush": true, 00:16:59.290 "reset": true, 00:16:59.290 "nvme_admin": true, 00:16:59.290 "nvme_io": true, 00:16:59.290 "nvme_io_md": false, 00:16:59.290 "write_zeroes": true, 00:16:59.290 "zcopy": false, 00:16:59.290 "get_zone_info": false, 00:16:59.290 "zone_management": false, 00:16:59.290 "zone_append": false, 00:16:59.290 "compare": true, 00:16:59.290 "compare_and_write": false, 00:16:59.290 "abort": true, 00:16:59.290 "seek_hole": false, 00:16:59.290 "seek_data": false, 00:16:59.290 "copy": true, 00:16:59.290 "nvme_iov_md": false 00:16:59.290 }, 00:16:59.290 "driver_specific": { 00:16:59.290 "nvme": [ 00:16:59.290 { 00:16:59.290 "pci_address": "0000:00:11.0", 00:16:59.290 "trid": { 00:16:59.290 "trtype": "PCIe", 00:16:59.290 "traddr": "0000:00:11.0" 00:16:59.290 }, 00:16:59.290 "ctrlr_data": { 00:16:59.290 "cntlid": 0, 00:16:59.290 "vendor_id": "0x1b36", 00:16:59.290 "model_number": "QEMU NVMe Ctrl", 00:16:59.290 "serial_number": "12341", 00:16:59.290 "firmware_revision": "8.0.0", 00:16:59.290 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:59.291 "oacs": { 00:16:59.291 "security": 0, 00:16:59.291 "format": 1, 00:16:59.291 "firmware": 0, 00:16:59.291 "ns_manage": 1 00:16:59.291 }, 00:16:59.291 "multi_ctrlr": false, 00:16:59.291 "ana_reporting": false 00:16:59.291 }, 00:16:59.291 "vs": { 00:16:59.291 "nvme_version": "1.4" 00:16:59.291 }, 00:16:59.291 "ns_data": { 00:16:59.291 "id": 1, 00:16:59.291 "can_share": false 00:16:59.291 } 00:16:59.291 } 00:16:59.291 ], 00:16:59.291 "mp_policy": "active_passive" 00:16:59.291 } 00:16:59.291 } 00:16:59.291 ]' 00:16:59.291 09:45:26 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:16:59.291 09:45:26 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # bs=4096 00:16:59.291 09:45:26 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:16:59.291 09:45:26 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # nb=1310720 00:16:59.291 09:45:26 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:16:59.291 09:45:26 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 5120 00:16:59.291 09:45:26 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:16:59.291 09:45:26 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:59.291 09:45:26 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:16:59.291 09:45:26 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:59.291 09:45:26 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:59.550 09:45:27 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=0f2867d6-fc73-4765-9b5a-eec1d691c893 00:16:59.550 09:45:27 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:16:59.550 09:45:27 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 0f2867d6-fc73-4765-9b5a-eec1d691c893 00:16:59.809 09:45:27 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:00.069 09:45:27 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=ce3639a5-1101-482b-a165-1030bb736d6f 00:17:00.069 09:45:27 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ce3639a5-1101-482b-a165-1030bb736d6f 00:17:00.326 09:45:27 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=d222cebd-eb58-4c11-b0b0-b2ce9f5ea863 00:17:00.326 09:45:27 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 d222cebd-eb58-4c11-b0b0-b2ce9f5ea863 00:17:00.326 09:45:27 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:17:00.326 09:45:27 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:00.326 09:45:27 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=d222cebd-eb58-4c11-b0b0-b2ce9f5ea863 00:17:00.326 09:45:27 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:17:00.326 09:45:27 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size d222cebd-eb58-4c11-b0b0-b2ce9f5ea863 00:17:00.326 09:45:27 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=d222cebd-eb58-4c11-b0b0-b2ce9f5ea863 00:17:00.326 09:45:27 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:17:00.326 09:45:27 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:17:00.326 09:45:27 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:17:00.326 09:45:27 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d222cebd-eb58-4c11-b0b0-b2ce9f5ea863 00:17:00.586 09:45:28 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:17:00.586 { 00:17:00.586 "name": "d222cebd-eb58-4c11-b0b0-b2ce9f5ea863", 00:17:00.586 "aliases": [ 00:17:00.586 "lvs/nvme0n1p0" 00:17:00.586 ], 00:17:00.586 "product_name": "Logical Volume", 00:17:00.586 "block_size": 4096, 00:17:00.586 "num_blocks": 26476544, 00:17:00.586 "uuid": "d222cebd-eb58-4c11-b0b0-b2ce9f5ea863", 00:17:00.586 "assigned_rate_limits": { 00:17:00.586 "rw_ios_per_sec": 0, 00:17:00.586 "rw_mbytes_per_sec": 0, 00:17:00.586 "r_mbytes_per_sec": 0, 00:17:00.586 "w_mbytes_per_sec": 0 00:17:00.586 }, 00:17:00.586 "claimed": false, 00:17:00.586 "zoned": false, 00:17:00.586 "supported_io_types": { 00:17:00.586 "read": true, 00:17:00.586 "write": true, 00:17:00.586 "unmap": true, 00:17:00.586 "flush": false, 00:17:00.586 "reset": true, 00:17:00.586 "nvme_admin": false, 00:17:00.586 "nvme_io": false, 00:17:00.586 "nvme_io_md": false, 00:17:00.586 "write_zeroes": true, 00:17:00.586 "zcopy": false, 00:17:00.586 "get_zone_info": false, 00:17:00.586 "zone_management": false, 00:17:00.586 "zone_append": false, 00:17:00.586 "compare": false, 00:17:00.586 "compare_and_write": false, 00:17:00.586 "abort": false, 00:17:00.586 "seek_hole": true, 00:17:00.587 "seek_data": true, 00:17:00.587 "copy": false, 00:17:00.587 "nvme_iov_md": false 00:17:00.587 }, 00:17:00.587 "driver_specific": { 00:17:00.587 "lvol": { 00:17:00.587 "lvol_store_uuid": "ce3639a5-1101-482b-a165-1030bb736d6f", 00:17:00.587 "base_bdev": "nvme0n1", 00:17:00.587 "thin_provision": true, 00:17:00.587 "num_allocated_clusters": 0, 00:17:00.587 "snapshot": false, 00:17:00.587 "clone": false, 00:17:00.587 "esnap_clone": false 00:17:00.587 } 00:17:00.587 } 00:17:00.587 } 00:17:00.587 ]' 00:17:00.587 09:45:28 ftl.ftl_trim -- 
common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:17:00.587 09:45:28 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # bs=4096 00:17:00.587 09:45:28 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:17:00.587 09:45:28 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # nb=26476544 00:17:00.587 09:45:28 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:17:00.587 09:45:28 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 103424 00:17:00.587 09:45:28 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:17:00.587 09:45:28 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:17:00.587 09:45:28 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:00.844 09:45:28 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:00.844 09:45:28 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:00.844 09:45:28 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size d222cebd-eb58-4c11-b0b0-b2ce9f5ea863 00:17:00.844 09:45:28 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=d222cebd-eb58-4c11-b0b0-b2ce9f5ea863 00:17:00.844 09:45:28 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:17:00.845 09:45:28 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:17:00.845 09:45:28 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:17:00.845 09:45:28 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d222cebd-eb58-4c11-b0b0-b2ce9f5ea863 00:17:01.103 09:45:28 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:17:01.103 { 00:17:01.103 "name": "d222cebd-eb58-4c11-b0b0-b2ce9f5ea863", 00:17:01.103 "aliases": [ 00:17:01.103 "lvs/nvme0n1p0" 00:17:01.103 ], 00:17:01.103 "product_name": "Logical Volume", 00:17:01.103 "block_size": 4096, 00:17:01.103 "num_blocks": 26476544, 00:17:01.103 "uuid": "d222cebd-eb58-4c11-b0b0-b2ce9f5ea863", 00:17:01.103 "assigned_rate_limits": { 00:17:01.103 "rw_ios_per_sec": 0, 00:17:01.103 "rw_mbytes_per_sec": 0, 00:17:01.103 "r_mbytes_per_sec": 0, 00:17:01.103 "w_mbytes_per_sec": 0 00:17:01.103 }, 00:17:01.103 "claimed": false, 00:17:01.103 "zoned": false, 00:17:01.103 "supported_io_types": { 00:17:01.103 "read": true, 00:17:01.103 "write": true, 00:17:01.103 "unmap": true, 00:17:01.103 "flush": false, 00:17:01.103 "reset": true, 00:17:01.103 "nvme_admin": false, 00:17:01.103 "nvme_io": false, 00:17:01.103 "nvme_io_md": false, 00:17:01.103 "write_zeroes": true, 00:17:01.103 "zcopy": false, 00:17:01.103 "get_zone_info": false, 00:17:01.103 "zone_management": false, 00:17:01.103 "zone_append": false, 00:17:01.103 "compare": false, 00:17:01.103 "compare_and_write": false, 00:17:01.103 "abort": false, 00:17:01.103 "seek_hole": true, 00:17:01.103 "seek_data": true, 00:17:01.103 "copy": false, 00:17:01.103 "nvme_iov_md": false 00:17:01.103 }, 00:17:01.103 "driver_specific": { 00:17:01.103 "lvol": { 00:17:01.103 "lvol_store_uuid": "ce3639a5-1101-482b-a165-1030bb736d6f", 00:17:01.103 "base_bdev": "nvme0n1", 00:17:01.103 "thin_provision": true, 00:17:01.103 "num_allocated_clusters": 0, 00:17:01.103 "snapshot": false, 00:17:01.103 "clone": false, 00:17:01.103 "esnap_clone": false 00:17:01.103 } 00:17:01.103 } 00:17:01.103 } 00:17:01.103 ]' 00:17:01.103 09:45:28 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:17:01.103 09:45:28 ftl.ftl_trim -- 
common/autotest_common.sh@1385 -- # bs=4096 00:17:01.103 09:45:28 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:17:01.103 09:45:28 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # nb=26476544 00:17:01.103 09:45:28 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:17:01.103 09:45:28 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 103424 00:17:01.103 09:45:28 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:17:01.103 09:45:28 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:01.361 09:45:28 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:17:01.361 09:45:28 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:17:01.361 09:45:28 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size d222cebd-eb58-4c11-b0b0-b2ce9f5ea863 00:17:01.361 09:45:28 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=d222cebd-eb58-4c11-b0b0-b2ce9f5ea863 00:17:01.361 09:45:28 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:17:01.361 09:45:28 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:17:01.361 09:45:28 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:17:01.361 09:45:28 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d222cebd-eb58-4c11-b0b0-b2ce9f5ea863 00:17:01.618 09:45:29 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:17:01.618 { 00:17:01.618 "name": "d222cebd-eb58-4c11-b0b0-b2ce9f5ea863", 00:17:01.618 "aliases": [ 00:17:01.618 "lvs/nvme0n1p0" 00:17:01.618 ], 00:17:01.618 "product_name": "Logical Volume", 00:17:01.618 "block_size": 4096, 00:17:01.618 "num_blocks": 26476544, 00:17:01.619 "uuid": "d222cebd-eb58-4c11-b0b0-b2ce9f5ea863", 00:17:01.619 "assigned_rate_limits": { 00:17:01.619 "rw_ios_per_sec": 0, 00:17:01.619 "rw_mbytes_per_sec": 0, 00:17:01.619 "r_mbytes_per_sec": 0, 00:17:01.619 "w_mbytes_per_sec": 0 00:17:01.619 }, 00:17:01.619 "claimed": false, 00:17:01.619 "zoned": false, 00:17:01.619 "supported_io_types": { 00:17:01.619 "read": true, 00:17:01.619 "write": true, 00:17:01.619 "unmap": true, 00:17:01.619 "flush": false, 00:17:01.619 "reset": true, 00:17:01.619 "nvme_admin": false, 00:17:01.619 "nvme_io": false, 00:17:01.619 "nvme_io_md": false, 00:17:01.619 "write_zeroes": true, 00:17:01.619 "zcopy": false, 00:17:01.619 "get_zone_info": false, 00:17:01.619 "zone_management": false, 00:17:01.619 "zone_append": false, 00:17:01.619 "compare": false, 00:17:01.619 "compare_and_write": false, 00:17:01.619 "abort": false, 00:17:01.619 "seek_hole": true, 00:17:01.619 "seek_data": true, 00:17:01.619 "copy": false, 00:17:01.619 "nvme_iov_md": false 00:17:01.619 }, 00:17:01.619 "driver_specific": { 00:17:01.619 "lvol": { 00:17:01.619 "lvol_store_uuid": "ce3639a5-1101-482b-a165-1030bb736d6f", 00:17:01.619 "base_bdev": "nvme0n1", 00:17:01.619 "thin_provision": true, 00:17:01.619 "num_allocated_clusters": 0, 00:17:01.619 "snapshot": false, 00:17:01.619 "clone": false, 00:17:01.619 "esnap_clone": false 00:17:01.619 } 00:17:01.619 } 00:17:01.619 } 00:17:01.619 ]' 00:17:01.619 09:45:29 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:17:01.619 09:45:29 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # bs=4096 00:17:01.619 09:45:29 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:17:01.619 09:45:29 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # 
nb=26476544 00:17:01.619 09:45:29 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:17:01.619 09:45:29 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 103424 00:17:01.619 09:45:29 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:17:01.619 09:45:29 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d d222cebd-eb58-4c11-b0b0-b2ce9f5ea863 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:17:01.882 [2024-11-07 09:45:29.302982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.882 [2024-11-07 09:45:29.303036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:01.882 [2024-11-07 09:45:29.303054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:01.882 [2024-11-07 09:45:29.303063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.882 [2024-11-07 09:45:29.305892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.882 [2024-11-07 09:45:29.305929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:01.882 [2024-11-07 09:45:29.305941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.804 ms 00:17:01.882 [2024-11-07 09:45:29.305949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.882 [2024-11-07 09:45:29.306117] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:01.882 [2024-11-07 09:45:29.306848] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:01.882 [2024-11-07 09:45:29.306876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.883 [2024-11-07 09:45:29.306885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:01.883 [2024-11-07 09:45:29.306895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.766 ms 00:17:01.883 [2024-11-07 09:45:29.306902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.883 [2024-11-07 09:45:29.307012] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 5c1cbb46-fd78-473b-92db-2b008b37049d 00:17:01.883 [2024-11-07 09:45:29.308106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.883 [2024-11-07 09:45:29.308139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:01.883 [2024-11-07 09:45:29.308148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:17:01.883 [2024-11-07 09:45:29.308157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.883 [2024-11-07 09:45:29.313568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.883 [2024-11-07 09:45:29.313737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:01.883 [2024-11-07 09:45:29.313755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.339 ms 00:17:01.883 [2024-11-07 09:45:29.313767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.883 [2024-11-07 09:45:29.313893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.883 [2024-11-07 09:45:29.313911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:01.883 [2024-11-07 09:45:29.313919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.069 ms 00:17:01.883 [2024-11-07 09:45:29.313932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.883 [2024-11-07 09:45:29.313972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.883 [2024-11-07 09:45:29.313982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:01.883 [2024-11-07 09:45:29.313990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:01.883 [2024-11-07 09:45:29.313999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.883 [2024-11-07 09:45:29.314029] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:01.883 [2024-11-07 09:45:29.317604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.883 [2024-11-07 09:45:29.317717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:01.883 [2024-11-07 09:45:29.317738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.577 ms 00:17:01.883 [2024-11-07 09:45:29.317746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.883 [2024-11-07 09:45:29.317807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.883 [2024-11-07 09:45:29.317816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:01.883 [2024-11-07 09:45:29.317826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:01.883 [2024-11-07 09:45:29.317845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.883 [2024-11-07 09:45:29.317884] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:01.883 [2024-11-07 09:45:29.318018] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:01.883 [2024-11-07 09:45:29.318038] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:01.883 [2024-11-07 09:45:29.318050] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:01.883 [2024-11-07 09:45:29.318062] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:01.883 [2024-11-07 09:45:29.318070] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:01.883 [2024-11-07 09:45:29.318080] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:01.883 [2024-11-07 09:45:29.318087] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:01.883 [2024-11-07 09:45:29.318096] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:01.883 [2024-11-07 09:45:29.318105] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:01.883 [2024-11-07 09:45:29.318115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.883 [2024-11-07 09:45:29.318122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:01.883 [2024-11-07 09:45:29.318131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.232 ms 00:17:01.883 [2024-11-07 09:45:29.318138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.883 [2024-11-07 09:45:29.318232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.883 
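The startup trace above is the tail end of the FTL device setup. As a minimal standalone sketch of the same steps (assuming a running SPDK target reachable over rpc.py; the lvol UUID below is the one from this run and changes on every fresh run):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  base=d222cebd-eb58-4c11-b0b0-b2ce9f5ea863   # thin-provisioned lvol created above

  # get_bdev_size pattern from the traces: MiB = block_size * num_blocks / 2^20
  bs=$("$rpc" bdev_get_bdevs -b "$base" | jq '.[] .block_size')   # 4096 in the dumps above
  nb=$("$rpc" bdev_get_bdevs -b "$base" | jq '.[] .num_blocks')   # 26476544
  echo "base size: $(( bs * nb / 1024 / 1024 )) MiB"              # => 103424

  # NV cache: attach the second NVMe controller, split off a 5171 MiB partition
  "$rpc" bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
  "$rpc" bdev_split_create nvc0n1 -s 5171 1                       # -> nvc0n1p0

  # -t 240 raises the RPC timeout: startup took ~2.6 s in this run, but
  # scrubbing a larger NV cache region can take much longer
  "$rpc" -t 240 bdev_ftl_create -b ftl0 -d "$base" -c nvc0n1p0 \
      --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10
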
[2024-11-07 09:45:29.318240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:01.883 [2024-11-07 09:45:29.318249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:17:01.883 [2024-11-07 09:45:29.318257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.883 [2024-11-07 09:45:29.318382] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:01.883 [2024-11-07 09:45:29.318391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:01.883 [2024-11-07 09:45:29.318400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:01.883 [2024-11-07 09:45:29.318408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:01.883 [2024-11-07 09:45:29.318417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:01.883 [2024-11-07 09:45:29.318424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:01.883 [2024-11-07 09:45:29.318432] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:01.883 [2024-11-07 09:45:29.318439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:01.883 [2024-11-07 09:45:29.318447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:01.883 [2024-11-07 09:45:29.318453] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:01.883 [2024-11-07 09:45:29.318462] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:01.883 [2024-11-07 09:45:29.318468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:01.883 [2024-11-07 09:45:29.318476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:01.883 [2024-11-07 09:45:29.318483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:01.883 [2024-11-07 09:45:29.318491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:01.883 [2024-11-07 09:45:29.318497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:01.883 [2024-11-07 09:45:29.318507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:01.883 [2024-11-07 09:45:29.318514] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:01.883 [2024-11-07 09:45:29.318521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:01.883 [2024-11-07 09:45:29.318528] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:01.883 [2024-11-07 09:45:29.318537] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:01.883 [2024-11-07 09:45:29.318543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:01.883 [2024-11-07 09:45:29.318551] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:01.883 [2024-11-07 09:45:29.318558] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:01.883 [2024-11-07 09:45:29.318565] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:01.883 [2024-11-07 09:45:29.318572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:01.883 [2024-11-07 09:45:29.318580] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:01.883 [2024-11-07 09:45:29.318586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:01.883 [2024-11-07 09:45:29.318594] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:17:01.883 [2024-11-07 09:45:29.318600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:01.883 [2024-11-07 09:45:29.318608] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:01.883 [2024-11-07 09:45:29.318614] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:01.883 [2024-11-07 09:45:29.318623] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:01.883 [2024-11-07 09:45:29.318641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:01.883 [2024-11-07 09:45:29.318651] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:01.883 [2024-11-07 09:45:29.318658] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:01.883 [2024-11-07 09:45:29.318666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:01.883 [2024-11-07 09:45:29.318673] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:01.883 [2024-11-07 09:45:29.318681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:01.883 [2024-11-07 09:45:29.318687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:01.883 [2024-11-07 09:45:29.318695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:01.883 [2024-11-07 09:45:29.318701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:01.883 [2024-11-07 09:45:29.318709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:01.883 [2024-11-07 09:45:29.318715] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:01.883 [2024-11-07 09:45:29.318724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:01.883 [2024-11-07 09:45:29.318731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:01.883 [2024-11-07 09:45:29.318740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:01.883 [2024-11-07 09:45:29.318748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:01.883 [2024-11-07 09:45:29.318759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:01.883 [2024-11-07 09:45:29.318766] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:01.883 [2024-11-07 09:45:29.318775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:01.883 [2024-11-07 09:45:29.318781] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:01.883 [2024-11-07 09:45:29.318789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:01.883 [2024-11-07 09:45:29.318799] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:01.883 [2024-11-07 09:45:29.318809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:01.884 [2024-11-07 09:45:29.318818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:01.884 [2024-11-07 09:45:29.318826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:01.884 [2024-11-07 09:45:29.318833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:17:01.884 [2024-11-07 09:45:29.318841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:01.884 [2024-11-07 09:45:29.318848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:01.884 [2024-11-07 09:45:29.318857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:01.884 [2024-11-07 09:45:29.318864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:01.884 [2024-11-07 09:45:29.318872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:01.884 [2024-11-07 09:45:29.318879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:01.884 [2024-11-07 09:45:29.318889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:01.884 [2024-11-07 09:45:29.318896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:01.884 [2024-11-07 09:45:29.318906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:01.884 [2024-11-07 09:45:29.318913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:01.884 [2024-11-07 09:45:29.318922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:01.884 [2024-11-07 09:45:29.318929] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:01.884 [2024-11-07 09:45:29.318944] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:01.884 [2024-11-07 09:45:29.318952] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:01.884 [2024-11-07 09:45:29.318961] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:01.884 [2024-11-07 09:45:29.318968] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:01.884 [2024-11-07 09:45:29.318976] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:01.884 [2024-11-07 09:45:29.318984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.884 [2024-11-07 09:45:29.318993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:01.884 [2024-11-07 09:45:29.319000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.673 ms 00:17:01.884 [2024-11-07 09:45:29.319009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.884 [2024-11-07 09:45:29.319075] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:17:01.884 [2024-11-07 09:45:29.319088] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:04.412 [2024-11-07 09:45:31.500438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.412 [2024-11-07 09:45:31.500505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:04.412 [2024-11-07 09:45:31.500520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2181.353 ms 00:17:04.412 [2024-11-07 09:45:31.500530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.412 [2024-11-07 09:45:31.526064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.412 [2024-11-07 09:45:31.526119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:04.412 [2024-11-07 09:45:31.526132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.271 ms 00:17:04.412 [2024-11-07 09:45:31.526141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.412 [2024-11-07 09:45:31.526297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.412 [2024-11-07 09:45:31.526309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:04.412 [2024-11-07 09:45:31.526318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:17:04.412 [2024-11-07 09:45:31.526329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.412 [2024-11-07 09:45:31.576737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.412 [2024-11-07 09:45:31.576972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:04.412 [2024-11-07 09:45:31.576992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.356 ms 00:17:04.412 [2024-11-07 09:45:31.577004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.412 [2024-11-07 09:45:31.577107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.412 [2024-11-07 09:45:31.577121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:04.412 [2024-11-07 09:45:31.577129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:04.412 [2024-11-07 09:45:31.577139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.412 [2024-11-07 09:45:31.577472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.413 [2024-11-07 09:45:31.577491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:04.413 [2024-11-07 09:45:31.577501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:17:04.413 [2024-11-07 09:45:31.577509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.413 [2024-11-07 09:45:31.577662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.413 [2024-11-07 09:45:31.577673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:04.413 [2024-11-07 09:45:31.577681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:17:04.413 [2024-11-07 09:45:31.577692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.413 [2024-11-07 09:45:31.591987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.413 [2024-11-07 09:45:31.592153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:17:04.413 [2024-11-07 09:45:31.592169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.255 ms 00:17:04.413 [2024-11-07 09:45:31.592178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.413 [2024-11-07 09:45:31.603553] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:04.413 [2024-11-07 09:45:31.617977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.413 [2024-11-07 09:45:31.618027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:04.413 [2024-11-07 09:45:31.618041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.673 ms 00:17:04.413 [2024-11-07 09:45:31.618049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.413 [2024-11-07 09:45:31.682842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.413 [2024-11-07 09:45:31.683047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:04.413 [2024-11-07 09:45:31.683072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.703 ms 00:17:04.413 [2024-11-07 09:45:31.683081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.413 [2024-11-07 09:45:31.683325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.413 [2024-11-07 09:45:31.683344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:04.413 [2024-11-07 09:45:31.683358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.156 ms 00:17:04.413 [2024-11-07 09:45:31.683366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.413 [2024-11-07 09:45:31.707528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.413 [2024-11-07 09:45:31.707573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:04.413 [2024-11-07 09:45:31.707589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.126 ms 00:17:04.413 [2024-11-07 09:45:31.707596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.413 [2024-11-07 09:45:31.730405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.413 [2024-11-07 09:45:31.730595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:04.413 [2024-11-07 09:45:31.730617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.654 ms 00:17:04.413 [2024-11-07 09:45:31.730625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.413 [2024-11-07 09:45:31.731373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.413 [2024-11-07 09:45:31.731394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:04.413 [2024-11-07 09:45:31.731406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.556 ms 00:17:04.413 [2024-11-07 09:45:31.731413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.413 [2024-11-07 09:45:31.799694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.413 [2024-11-07 09:45:31.799747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:04.413 [2024-11-07 09:45:31.799766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.225 ms 00:17:04.413 [2024-11-07 09:45:31.799774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
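The "l2p maximum resident size is: 59 (of 60) MiB" notice above follows directly from the layout dump earlier (L2P entries: 23592960, L2P address size: 4). A back-of-the-envelope check, pure shell arithmetic with values taken from that dump:

  l2p_entries=23592960      # "L2P entries" from the layout dump above
  l2p_addr_bytes=4          # "L2P address size" from the same dump
  echo $(( l2p_entries * l2p_addr_bytes / 1024 / 1024 ))   # => 90 (MiB)
  # The full table is 90 MiB (matching the 90.00 MiB l2p region above), but
  # --l2p_dram_limit 60 caps resident DRAM, so the cache keeps 59 MiB
  # resident and pages the rest in from the l2p region as needed.
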
00:17:04.413 [2024-11-07 09:45:31.824657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.413 [2024-11-07 09:45:31.824859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:04.413 [2024-11-07 09:45:31.824880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.781 ms 00:17:04.413 [2024-11-07 09:45:31.824889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.413 [2024-11-07 09:45:31.849622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.413 [2024-11-07 09:45:31.849681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:04.413 [2024-11-07 09:45:31.849697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.666 ms 00:17:04.413 [2024-11-07 09:45:31.849705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.413 [2024-11-07 09:45:31.873839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.413 [2024-11-07 09:45:31.873888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:04.413 [2024-11-07 09:45:31.873903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.043 ms 00:17:04.413 [2024-11-07 09:45:31.873922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.413 [2024-11-07 09:45:31.874004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.413 [2024-11-07 09:45:31.874016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:04.413 [2024-11-07 09:45:31.874029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:04.413 [2024-11-07 09:45:31.874036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.413 [2024-11-07 09:45:31.874113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.413 [2024-11-07 09:45:31.874122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:04.413 [2024-11-07 09:45:31.874132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:17:04.413 [2024-11-07 09:45:31.874139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.413 [2024-11-07 09:45:31.874947] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:04.413 [2024-11-07 09:45:31.878229] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2571.644 ms, result 0 00:17:04.413 [2024-11-07 09:45:31.878971] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:04.413 { 00:17:04.413 "name": "ftl0", 00:17:04.413 "uuid": "5c1cbb46-fd78-473b-92db-2b008b37049d" 00:17:04.413 } 00:17:04.413 09:45:31 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:17:04.413 09:45:31 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local bdev_name=ftl0 00:17:04.413 09:45:31 ftl.ftl_trim -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:04.413 09:45:31 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local i 00:17:04.413 09:45:31 ftl.ftl_trim -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:04.413 09:45:31 ftl.ftl_trim -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:04.413 09:45:31 ftl.ftl_trim -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:04.671 09:45:32 ftl.ftl_trim -- 
common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:04.930 [ 00:17:04.930 { 00:17:04.930 "name": "ftl0", 00:17:04.930 "aliases": [ 00:17:04.930 "5c1cbb46-fd78-473b-92db-2b008b37049d" 00:17:04.930 ], 00:17:04.930 "product_name": "FTL disk", 00:17:04.930 "block_size": 4096, 00:17:04.930 "num_blocks": 23592960, 00:17:04.930 "uuid": "5c1cbb46-fd78-473b-92db-2b008b37049d", 00:17:04.930 "assigned_rate_limits": { 00:17:04.930 "rw_ios_per_sec": 0, 00:17:04.930 "rw_mbytes_per_sec": 0, 00:17:04.930 "r_mbytes_per_sec": 0, 00:17:04.930 "w_mbytes_per_sec": 0 00:17:04.930 }, 00:17:04.930 "claimed": false, 00:17:04.930 "zoned": false, 00:17:04.930 "supported_io_types": { 00:17:04.930 "read": true, 00:17:04.930 "write": true, 00:17:04.930 "unmap": true, 00:17:04.930 "flush": true, 00:17:04.930 "reset": false, 00:17:04.930 "nvme_admin": false, 00:17:04.930 "nvme_io": false, 00:17:04.930 "nvme_io_md": false, 00:17:04.930 "write_zeroes": true, 00:17:04.930 "zcopy": false, 00:17:04.930 "get_zone_info": false, 00:17:04.930 "zone_management": false, 00:17:04.930 "zone_append": false, 00:17:04.930 "compare": false, 00:17:04.930 "compare_and_write": false, 00:17:04.930 "abort": false, 00:17:04.930 "seek_hole": false, 00:17:04.930 "seek_data": false, 00:17:04.930 "copy": false, 00:17:04.930 "nvme_iov_md": false 00:17:04.930 }, 00:17:04.930 "driver_specific": { 00:17:04.930 "ftl": { 00:17:04.930 "base_bdev": "d222cebd-eb58-4c11-b0b0-b2ce9f5ea863", 00:17:04.930 "cache": "nvc0n1p0" 00:17:04.930 } 00:17:04.930 } 00:17:04.930 } 00:17:04.930 ] 00:17:04.930 09:45:32 ftl.ftl_trim -- common/autotest_common.sh@909 -- # return 0 00:17:04.930 09:45:32 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:17:04.930 09:45:32 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:04.930 09:45:32 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:17:04.930 09:45:32 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:17:05.188 09:45:32 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:17:05.188 { 00:17:05.188 "name": "ftl0", 00:17:05.188 "aliases": [ 00:17:05.188 "5c1cbb46-fd78-473b-92db-2b008b37049d" 00:17:05.188 ], 00:17:05.188 "product_name": "FTL disk", 00:17:05.188 "block_size": 4096, 00:17:05.188 "num_blocks": 23592960, 00:17:05.188 "uuid": "5c1cbb46-fd78-473b-92db-2b008b37049d", 00:17:05.188 "assigned_rate_limits": { 00:17:05.188 "rw_ios_per_sec": 0, 00:17:05.188 "rw_mbytes_per_sec": 0, 00:17:05.188 "r_mbytes_per_sec": 0, 00:17:05.188 "w_mbytes_per_sec": 0 00:17:05.188 }, 00:17:05.188 "claimed": false, 00:17:05.188 "zoned": false, 00:17:05.188 "supported_io_types": { 00:17:05.188 "read": true, 00:17:05.188 "write": true, 00:17:05.188 "unmap": true, 00:17:05.188 "flush": true, 00:17:05.188 "reset": false, 00:17:05.188 "nvme_admin": false, 00:17:05.188 "nvme_io": false, 00:17:05.188 "nvme_io_md": false, 00:17:05.188 "write_zeroes": true, 00:17:05.188 "zcopy": false, 00:17:05.188 "get_zone_info": false, 00:17:05.188 "zone_management": false, 00:17:05.188 "zone_append": false, 00:17:05.188 "compare": false, 00:17:05.188 "compare_and_write": false, 00:17:05.188 "abort": false, 00:17:05.188 "seek_hole": false, 00:17:05.189 "seek_data": false, 00:17:05.189 "copy": false, 00:17:05.189 "nvme_iov_md": false 00:17:05.189 }, 00:17:05.189 "driver_specific": { 00:17:05.189 "ftl": { 00:17:05.189 "base_bdev": "d222cebd-eb58-4c11-b0b0-b2ce9f5ea863", 
00:17:05.189 "cache": "nvc0n1p0" 00:17:05.189 } 00:17:05.189 } 00:17:05.189 } 00:17:05.189 ]' 00:17:05.189 09:45:32 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:17:05.189 09:45:32 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:17:05.189 09:45:32 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:05.448 [2024-11-07 09:45:33.010790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.448 [2024-11-07 09:45:33.010845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:05.448 [2024-11-07 09:45:33.010860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:05.448 [2024-11-07 09:45:33.010871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.448 [2024-11-07 09:45:33.010901] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:05.448 [2024-11-07 09:45:33.013481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.449 [2024-11-07 09:45:33.013515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:05.449 [2024-11-07 09:45:33.013530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.562 ms 00:17:05.449 [2024-11-07 09:45:33.013538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.449 [2024-11-07 09:45:33.014001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.449 [2024-11-07 09:45:33.014017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:05.449 [2024-11-07 09:45:33.014027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.428 ms 00:17:05.449 [2024-11-07 09:45:33.014035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.449 [2024-11-07 09:45:33.017687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.449 [2024-11-07 09:45:33.017713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:05.449 [2024-11-07 09:45:33.017724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.623 ms 00:17:05.449 [2024-11-07 09:45:33.017733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.449 [2024-11-07 09:45:33.024741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.449 [2024-11-07 09:45:33.024773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:05.449 [2024-11-07 09:45:33.024785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.963 ms 00:17:05.449 [2024-11-07 09:45:33.024794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.449 [2024-11-07 09:45:33.048800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.449 [2024-11-07 09:45:33.048849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:05.449 [2024-11-07 09:45:33.048866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.925 ms 00:17:05.449 [2024-11-07 09:45:33.048874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.449 [2024-11-07 09:45:33.063665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.449 [2024-11-07 09:45:33.063712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:05.449 [2024-11-07 09:45:33.063726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 14.713 ms 00:17:05.449 [2024-11-07 09:45:33.063737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.449 [2024-11-07 09:45:33.063939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.449 [2024-11-07 09:45:33.063950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:05.449 [2024-11-07 09:45:33.063960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:17:05.449 [2024-11-07 09:45:33.063967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.449 [2024-11-07 09:45:33.087910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.449 [2024-11-07 09:45:33.087959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:05.449 [2024-11-07 09:45:33.087973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.912 ms 00:17:05.449 [2024-11-07 09:45:33.087981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.449 [2024-11-07 09:45:33.110964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.449 [2024-11-07 09:45:33.111010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:05.449 [2024-11-07 09:45:33.111027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.899 ms 00:17:05.449 [2024-11-07 09:45:33.111034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.713 [2024-11-07 09:45:33.134534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.713 [2024-11-07 09:45:33.134580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:05.713 [2024-11-07 09:45:33.134594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.424 ms 00:17:05.713 [2024-11-07 09:45:33.134602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.713 [2024-11-07 09:45:33.157214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.713 [2024-11-07 09:45:33.157258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:05.713 [2024-11-07 09:45:33.157272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.483 ms 00:17:05.713 [2024-11-07 09:45:33.157280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.713 [2024-11-07 09:45:33.157344] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:05.713 [2024-11-07 09:45:33.157359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:05.713 [2024-11-07 09:45:33.157371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:05.713 [2024-11-07 09:45:33.157379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:05.713 [2024-11-07 09:45:33.157389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:05.713 [2024-11-07 09:45:33.157396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:05.713 [2024-11-07 09:45:33.157408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:05.713 [2024-11-07 09:45:33.157415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:05.713 [2024-11-07 09:45:33.157424] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:05.713 [2024-11-07 09:45:33.157432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:05.713 [2024-11-07 09:45:33.157441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:05.713 [2024-11-07 09:45:33.157449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:05.713 [2024-11-07 09:45:33.157458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:05.713 [2024-11-07 09:45:33.157466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:05.713 [2024-11-07 09:45:33.157475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:05.713 [2024-11-07 09:45:33.157482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:05.713 [2024-11-07 09:45:33.157490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:05.713 [2024-11-07 09:45:33.157497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:05.713 [2024-11-07 09:45:33.157506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:05.713 [2024-11-07 09:45:33.157513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:05.713 [2024-11-07 09:45:33.157522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:05.713 [2024-11-07 09:45:33.157530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:05.713 [2024-11-07 09:45:33.157556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:05.713 [2024-11-07 09:45:33.157564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:05.713 [2024-11-07 09:45:33.157573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:05.713 [2024-11-07 09:45:33.157580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:05.713 [2024-11-07 09:45:33.157589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:05.713 [2024-11-07 09:45:33.157597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:05.713 [2024-11-07 09:45:33.157606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:05.713 [2024-11-07 09:45:33.157613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:05.713 [2024-11-07 09:45:33.157621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:05.713 [2024-11-07 09:45:33.157639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:05.713 [2024-11-07 09:45:33.157648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:05.713 
[2024-11-07 09:45:33.157656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:05.714 [2024-11-07 09:45:33.157666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:05.714 [2024-11-07 09:45:33.157673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:05.714 [2024-11-07 09:45:33.157682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:05.714 [2024-11-07 09:45:33.157690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:05.714 [2024-11-07 09:45:33.157700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:05.714 [2024-11-07 09:45:33.157708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:05.714 [2024-11-07 09:45:33.157717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:05.714 [2024-11-07 09:45:33.157724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:05.714 [2024-11-07 09:45:33.157733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:05.714 [2024-11-07 09:45:33.157741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:05.714 [2024-11-07 09:45:33.157751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:05.714 [2024-11-07 09:45:33.157758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:05.714 [2024-11-07 09:45:33.157768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:05.714 [2024-11-07 09:45:33.157776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:05.714 [2024-11-07 09:45:33.157785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:05.714 [2024-11-07 09:45:33.157792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:05.714 [2024-11-07 09:45:33.157802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:05.714 [2024-11-07 09:45:33.157809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:05.714 [2024-11-07 09:45:33.157818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:05.714 [2024-11-07 09:45:33.157825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:05.714 [2024-11-07 09:45:33.157836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:05.714 [2024-11-07 09:45:33.157843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:05.714 [2024-11-07 09:45:33.157852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:05.714 [2024-11-07 09:45:33.157859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:17:05.714 [2024-11-07 09:45:33.157868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:05.714 [2024-11-07 09:45:33.157875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:05.714 [2024-11-07 09:45:33.157884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:05.714 [2024-11-07 09:45:33.157891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:05.714 [2024-11-07 09:45:33.157900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:05.714 [2024-11-07 09:45:33.157907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:05.714 [2024-11-07 09:45:33.157916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:05.714 [2024-11-07 09:45:33.157924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:05.714 [2024-11-07 09:45:33.157933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:05.714 [2024-11-07 09:45:33.157940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:05.714 [2024-11-07 09:45:33.157949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:05.714 [2024-11-07 09:45:33.157956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:05.714 [2024-11-07 09:45:33.157966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:05.714 [2024-11-07 09:45:33.157974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:05.714 [2024-11-07 09:45:33.157984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:05.714 [2024-11-07 09:45:33.157991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:05.714 [2024-11-07 09:45:33.157999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:05.714 [2024-11-07 09:45:33.158007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:05.714 [2024-11-07 09:45:33.158015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:05.714 [2024-11-07 09:45:33.158023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:05.714 [2024-11-07 09:45:33.158032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:05.714 [2024-11-07 09:45:33.158039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:05.714 [2024-11-07 09:45:33.158047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:05.714 [2024-11-07 09:45:33.158054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:05.714 [2024-11-07 09:45:33.158063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:17:05.714 [2024-11-07 09:45:33.158071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:05.714 [2024-11-07 09:45:33.158079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:05.714 [2024-11-07 09:45:33.158087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:05.714 [2024-11-07 09:45:33.158098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:05.714 [2024-11-07 09:45:33.158105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:05.714 [2024-11-07 09:45:33.158113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:05.714 [2024-11-07 09:45:33.158121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:05.714 [2024-11-07 09:45:33.158129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:05.714 [2024-11-07 09:45:33.158137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:05.714 [2024-11-07 09:45:33.158145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:05.714 [2024-11-07 09:45:33.158153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:05.714 [2024-11-07 09:45:33.158165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:05.714 [2024-11-07 09:45:33.158173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:05.714 [2024-11-07 09:45:33.158182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:05.714 [2024-11-07 09:45:33.158190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:05.714 [2024-11-07 09:45:33.158199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:05.714 [2024-11-07 09:45:33.158207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:05.714 [2024-11-07 09:45:33.158216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:05.714 [2024-11-07 09:45:33.158233] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:05.714 [2024-11-07 09:45:33.158243] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5c1cbb46-fd78-473b-92db-2b008b37049d 00:17:05.714 [2024-11-07 09:45:33.158251] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:05.714 [2024-11-07 09:45:33.158259] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:05.714 [2024-11-07 09:45:33.158266] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:05.714 [2024-11-07 09:45:33.158275] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:05.714 [2024-11-07 09:45:33.158284] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:05.714 [2024-11-07 09:45:33.158293] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:17:05.714 [2024-11-07 09:45:33.158300] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:05.714 [2024-11-07 09:45:33.158307] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:05.714 [2024-11-07 09:45:33.158313] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:05.714 [2024-11-07 09:45:33.158322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.714 [2024-11-07 09:45:33.158330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:05.714 [2024-11-07 09:45:33.158340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.980 ms 00:17:05.714 [2024-11-07 09:45:33.158347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.714 [2024-11-07 09:45:33.170672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.714 [2024-11-07 09:45:33.170714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:05.714 [2024-11-07 09:45:33.170732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.296 ms 00:17:05.714 [2024-11-07 09:45:33.170740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.714 [2024-11-07 09:45:33.171134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.714 [2024-11-07 09:45:33.171150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:05.714 [2024-11-07 09:45:33.171160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:17:05.714 [2024-11-07 09:45:33.171167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.714 [2024-11-07 09:45:33.214604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.714 [2024-11-07 09:45:33.214659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:05.714 [2024-11-07 09:45:33.214671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.714 [2024-11-07 09:45:33.214679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.715 [2024-11-07 09:45:33.214789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.715 [2024-11-07 09:45:33.214799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:05.715 [2024-11-07 09:45:33.214808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.715 [2024-11-07 09:45:33.214815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.715 [2024-11-07 09:45:33.214883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.715 [2024-11-07 09:45:33.214893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:05.715 [2024-11-07 09:45:33.214906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.715 [2024-11-07 09:45:33.214913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.715 [2024-11-07 09:45:33.214937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.715 [2024-11-07 09:45:33.214945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:05.715 [2024-11-07 09:45:33.214954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.715 [2024-11-07 09:45:33.214961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.715 [2024-11-07 09:45:33.295293] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.715 [2024-11-07 09:45:33.295344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:05.715 [2024-11-07 09:45:33.295356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.715 [2024-11-07 09:45:33.295364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.715 [2024-11-07 09:45:33.357447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.715 [2024-11-07 09:45:33.357497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:05.715 [2024-11-07 09:45:33.357510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.715 [2024-11-07 09:45:33.357518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.715 [2024-11-07 09:45:33.357592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.715 [2024-11-07 09:45:33.357602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:05.715 [2024-11-07 09:45:33.357649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.715 [2024-11-07 09:45:33.357660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.715 [2024-11-07 09:45:33.357707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.715 [2024-11-07 09:45:33.357716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:05.715 [2024-11-07 09:45:33.357724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.715 [2024-11-07 09:45:33.357731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.715 [2024-11-07 09:45:33.357836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.715 [2024-11-07 09:45:33.357846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:05.715 [2024-11-07 09:45:33.357855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.715 [2024-11-07 09:45:33.357862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.715 [2024-11-07 09:45:33.357911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.715 [2024-11-07 09:45:33.357920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:05.715 [2024-11-07 09:45:33.357930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.715 [2024-11-07 09:45:33.357937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.715 [2024-11-07 09:45:33.357985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.715 [2024-11-07 09:45:33.357993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:05.715 [2024-11-07 09:45:33.358004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.715 [2024-11-07 09:45:33.358011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.715 [2024-11-07 09:45:33.358063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.715 [2024-11-07 09:45:33.358073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:05.715 [2024-11-07 09:45:33.358082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.715 [2024-11-07 09:45:33.358089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0
00:17:05.715 [2024-11-07 09:45:33.358251] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 347.447 ms, result 0
00:17:05.715 true
00:17:05.973 09:45:33 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 73675
00:17:05.973 09:45:33 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 73675 ']'
00:17:05.974 09:45:33 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 73675
00:17:05.974 09:45:33 ftl.ftl_trim -- common/autotest_common.sh@957 -- # uname
00:17:05.974 09:45:33 ftl.ftl_trim -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:17:05.974 09:45:33 ftl.ftl_trim -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73675
00:17:05.974 09:45:33 ftl.ftl_trim -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:17:05.974 killing process with pid 73675 09:45:33 ftl.ftl_trim -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:17:05.974 09:45:33 ftl.ftl_trim -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73675'
00:17:05.974 09:45:33 ftl.ftl_trim -- common/autotest_common.sh@971 -- # kill 73675
00:17:05.974 09:45:33 ftl.ftl_trim -- common/autotest_common.sh@976 -- # wait 73675
00:17:12.528 09:45:39 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536
00:17:13.463 65536+0 records in
00:17:13.463 65536+0 records out
00:17:13.463 268435456 bytes (268 MB, 256 MiB) copied, 1.06724 s, 252 MB/s
00:17:13.463 09:45:40 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:17:13.463 [2024-11-07 09:45:40.930900] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization...
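[Editor's note] The xtrace above (trim.sh@63) is the harness's killprocess helper from autotest_common.sh tearing down the previous SPDK app: it guards against an empty pid, probes liveness with kill -0, resolves the command name with ps --no-headers -o comm= (reactor_0, the app's primary reactor thread), then kills and reaps the process. A minimal bash sketch of that flow, reconstructed only from the visible trace; the guards and the sudo branch are simplified assumptions, not the verbatim helper:

    # Sketch reconstructed from the xtrace above; not SPDK's verbatim source.
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1      # the '[' -z 73675 ']' guard, inverted
        kill -0 "$pid" || return 1     # is the process still alive?
        local process_name=
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 here
        fi
        # the real helper special-cases sudo-wrapped apps; simplified here
        [ "$process_name" = sudo ] && return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"   # blocks until the app actually exits (~6.5 s above, per the elapsed stamps)
    }

The dd step at trim.sh@66 then generates the random_pattern input: 65536 records x 4 KiB = 268,435,456 bytes (256 MiB), and 268,435,456 B / 1.06724 s is roughly 252 MB/s in dd's decimal units, matching the reported rate. spdk_dd at trim.sh@69 streams that file onto the ftl0 bdev described by ftl.json, which is what kicks off the 'FTL startup' sequence that follows.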
00:17:13.463 [2024-11-07 09:45:40.931030] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73858 ] 00:17:13.463 [2024-11-07 09:45:41.087091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.756 [2024-11-07 09:45:41.188312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:14.015 [2024-11-07 09:45:41.443620] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:14.015 [2024-11-07 09:45:41.443698] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:14.015 [2024-11-07 09:45:41.597982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.015 [2024-11-07 09:45:41.598041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:14.015 [2024-11-07 09:45:41.598054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:14.015 [2024-11-07 09:45:41.598063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.015 [2024-11-07 09:45:41.600770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.015 [2024-11-07 09:45:41.600810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:14.015 [2024-11-07 09:45:41.600820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.689 ms 00:17:14.015 [2024-11-07 09:45:41.600827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.015 [2024-11-07 09:45:41.600903] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:14.015 [2024-11-07 09:45:41.601623] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:14.015 [2024-11-07 09:45:41.601660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.015 [2024-11-07 09:45:41.601669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:14.015 [2024-11-07 09:45:41.601677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.765 ms 00:17:14.015 [2024-11-07 09:45:41.601685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.015 [2024-11-07 09:45:41.603369] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:14.015 [2024-11-07 09:45:41.615655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.015 [2024-11-07 09:45:41.615705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:14.015 [2024-11-07 09:45:41.615719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.288 ms 00:17:14.015 [2024-11-07 09:45:41.615728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.015 [2024-11-07 09:45:41.615834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.015 [2024-11-07 09:45:41.615846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:14.015 [2024-11-07 09:45:41.615855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:17:14.015 [2024-11-07 09:45:41.615862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.015 [2024-11-07 09:45:41.621073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:14.015 [2024-11-07 09:45:41.621113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:14.015 [2024-11-07 09:45:41.621123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.168 ms 00:17:14.015 [2024-11-07 09:45:41.621131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.015 [2024-11-07 09:45:41.621231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.015 [2024-11-07 09:45:41.621241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:14.015 [2024-11-07 09:45:41.621249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:17:14.015 [2024-11-07 09:45:41.621257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.015 [2024-11-07 09:45:41.621289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.015 [2024-11-07 09:45:41.621297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:14.015 [2024-11-07 09:45:41.621305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:14.015 [2024-11-07 09:45:41.621312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.015 [2024-11-07 09:45:41.621335] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:14.015 [2024-11-07 09:45:41.624839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.015 [2024-11-07 09:45:41.624869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:14.015 [2024-11-07 09:45:41.624879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.511 ms 00:17:14.015 [2024-11-07 09:45:41.624887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.015 [2024-11-07 09:45:41.624923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.015 [2024-11-07 09:45:41.624932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:14.015 [2024-11-07 09:45:41.624940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:14.015 [2024-11-07 09:45:41.624947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.015 [2024-11-07 09:45:41.624968] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:14.016 [2024-11-07 09:45:41.624985] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:14.016 [2024-11-07 09:45:41.625019] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:14.016 [2024-11-07 09:45:41.625034] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:14.016 [2024-11-07 09:45:41.625137] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:14.016 [2024-11-07 09:45:41.625148] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:14.016 [2024-11-07 09:45:41.625157] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:14.016 [2024-11-07 09:45:41.625169] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:14.016 [2024-11-07 09:45:41.625178] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:14.016 [2024-11-07 09:45:41.625186] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:14.016 [2024-11-07 09:45:41.625193] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:14.016 [2024-11-07 09:45:41.625201] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:14.016 [2024-11-07 09:45:41.625207] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:14.016 [2024-11-07 09:45:41.625215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.016 [2024-11-07 09:45:41.625222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:14.016 [2024-11-07 09:45:41.625229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:17:14.016 [2024-11-07 09:45:41.625236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.016 [2024-11-07 09:45:41.625336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.016 [2024-11-07 09:45:41.625348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:14.016 [2024-11-07 09:45:41.625357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:17:14.016 [2024-11-07 09:45:41.625364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.016 [2024-11-07 09:45:41.625462] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:14.016 [2024-11-07 09:45:41.625472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:14.016 [2024-11-07 09:45:41.625480] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:14.016 [2024-11-07 09:45:41.625487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:14.016 [2024-11-07 09:45:41.625495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:14.016 [2024-11-07 09:45:41.625501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:14.016 [2024-11-07 09:45:41.625508] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:14.016 [2024-11-07 09:45:41.625514] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:14.016 [2024-11-07 09:45:41.625521] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:14.016 [2024-11-07 09:45:41.625528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:14.016 [2024-11-07 09:45:41.625535] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:14.016 [2024-11-07 09:45:41.625541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:14.016 [2024-11-07 09:45:41.625548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:14.016 [2024-11-07 09:45:41.625562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:14.016 [2024-11-07 09:45:41.625568] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:14.016 [2024-11-07 09:45:41.625581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:14.016 [2024-11-07 09:45:41.625587] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:14.016 [2024-11-07 09:45:41.625594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:14.016 [2024-11-07 09:45:41.625600] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:14.016 [2024-11-07 09:45:41.625607] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:14.016 [2024-11-07 09:45:41.625613] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:14.016 [2024-11-07 09:45:41.625620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:14.016 [2024-11-07 09:45:41.625638] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:14.016 [2024-11-07 09:45:41.625645] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:14.016 [2024-11-07 09:45:41.625652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:14.016 [2024-11-07 09:45:41.625659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:14.016 [2024-11-07 09:45:41.625665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:14.016 [2024-11-07 09:45:41.625672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:14.016 [2024-11-07 09:45:41.625679] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:14.016 [2024-11-07 09:45:41.625685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:14.016 [2024-11-07 09:45:41.625692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:14.016 [2024-11-07 09:45:41.625699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:14.016 [2024-11-07 09:45:41.625705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:14.016 [2024-11-07 09:45:41.625712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:14.016 [2024-11-07 09:45:41.625718] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:14.016 [2024-11-07 09:45:41.625724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:14.016 [2024-11-07 09:45:41.625731] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:14.016 [2024-11-07 09:45:41.625737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:14.016 [2024-11-07 09:45:41.625744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:14.016 [2024-11-07 09:45:41.625750] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:14.016 [2024-11-07 09:45:41.625756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:14.016 [2024-11-07 09:45:41.625763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:14.016 [2024-11-07 09:45:41.625769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:14.016 [2024-11-07 09:45:41.625776] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:14.016 [2024-11-07 09:45:41.625783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:14.016 [2024-11-07 09:45:41.625792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:14.016 [2024-11-07 09:45:41.625799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:14.016 [2024-11-07 09:45:41.625808] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:14.016 [2024-11-07 09:45:41.625815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:14.016 [2024-11-07 09:45:41.625822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:14.016 
[2024-11-07 09:45:41.625829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:14.016 [2024-11-07 09:45:41.625836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:14.016 [2024-11-07 09:45:41.625842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:14.016 [2024-11-07 09:45:41.625850] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:14.016 [2024-11-07 09:45:41.625859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:14.016 [2024-11-07 09:45:41.625868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:14.016 [2024-11-07 09:45:41.625875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:14.016 [2024-11-07 09:45:41.625882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:14.016 [2024-11-07 09:45:41.625889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:14.016 [2024-11-07 09:45:41.625896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:14.016 [2024-11-07 09:45:41.625903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:14.016 [2024-11-07 09:45:41.625909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:14.016 [2024-11-07 09:45:41.625921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:14.016 [2024-11-07 09:45:41.625928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:14.016 [2024-11-07 09:45:41.625935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:14.016 [2024-11-07 09:45:41.625942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:14.016 [2024-11-07 09:45:41.625948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:14.016 [2024-11-07 09:45:41.625955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:14.016 [2024-11-07 09:45:41.625963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:14.016 [2024-11-07 09:45:41.625969] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:14.016 [2024-11-07 09:45:41.625977] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:14.016 [2024-11-07 09:45:41.625985] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:17:14.016 [2024-11-07 09:45:41.625992] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:14.016 [2024-11-07 09:45:41.625999] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:14.016 [2024-11-07 09:45:41.626006] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:14.017 [2024-11-07 09:45:41.626014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.017 [2024-11-07 09:45:41.626025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:14.017 [2024-11-07 09:45:41.626032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.620 ms 00:17:14.017 [2024-11-07 09:45:41.626039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.017 [2024-11-07 09:45:41.652001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.017 [2024-11-07 09:45:41.652049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:14.017 [2024-11-07 09:45:41.652061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.893 ms 00:17:14.017 [2024-11-07 09:45:41.652069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.017 [2024-11-07 09:45:41.652213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.017 [2024-11-07 09:45:41.652223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:14.017 [2024-11-07 09:45:41.652232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:17:14.017 [2024-11-07 09:45:41.652239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.274 [2024-11-07 09:45:41.692667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.274 [2024-11-07 09:45:41.692722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:14.274 [2024-11-07 09:45:41.692739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.405 ms 00:17:14.274 [2024-11-07 09:45:41.692747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.274 [2024-11-07 09:45:41.692874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.274 [2024-11-07 09:45:41.692887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:14.274 [2024-11-07 09:45:41.692896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:14.274 [2024-11-07 09:45:41.692904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.274 [2024-11-07 09:45:41.693240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.274 [2024-11-07 09:45:41.693256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:14.274 [2024-11-07 09:45:41.693266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:17:14.274 [2024-11-07 09:45:41.693277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.274 [2024-11-07 09:45:41.693406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.274 [2024-11-07 09:45:41.693415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:14.274 [2024-11-07 09:45:41.693423] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:17:14.274 [2024-11-07 09:45:41.693430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.274 [2024-11-07 09:45:41.706793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.274 [2024-11-07 09:45:41.706838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:14.274 [2024-11-07 09:45:41.706850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.343 ms 00:17:14.274 [2024-11-07 09:45:41.706858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.274 [2024-11-07 09:45:41.719633] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:17:14.274 [2024-11-07 09:45:41.719686] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:14.274 [2024-11-07 09:45:41.719700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.274 [2024-11-07 09:45:41.719708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:14.274 [2024-11-07 09:45:41.719719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.719 ms 00:17:14.274 [2024-11-07 09:45:41.719726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.274 [2024-11-07 09:45:41.744841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.274 [2024-11-07 09:45:41.744905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:14.274 [2024-11-07 09:45:41.744930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.999 ms 00:17:14.274 [2024-11-07 09:45:41.744939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.274 [2024-11-07 09:45:41.757567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.274 [2024-11-07 09:45:41.757617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:14.274 [2024-11-07 09:45:41.757637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.493 ms 00:17:14.274 [2024-11-07 09:45:41.757645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.274 [2024-11-07 09:45:41.769508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.274 [2024-11-07 09:45:41.769556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:14.274 [2024-11-07 09:45:41.769569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.760 ms 00:17:14.274 [2024-11-07 09:45:41.769577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.274 [2024-11-07 09:45:41.770256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.274 [2024-11-07 09:45:41.770284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:14.274 [2024-11-07 09:45:41.770294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.534 ms 00:17:14.274 [2024-11-07 09:45:41.770302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.274 [2024-11-07 09:45:41.826305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.274 [2024-11-07 09:45:41.826361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:14.274 [2024-11-07 09:45:41.826374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 55.980 ms 00:17:14.274 [2024-11-07 09:45:41.826382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.274 [2024-11-07 09:45:41.837249] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:14.274 [2024-11-07 09:45:41.852027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.274 [2024-11-07 09:45:41.852078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:14.274 [2024-11-07 09:45:41.852091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.503 ms 00:17:14.274 [2024-11-07 09:45:41.852099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.274 [2024-11-07 09:45:41.852200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.274 [2024-11-07 09:45:41.852212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:14.274 [2024-11-07 09:45:41.852221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:14.274 [2024-11-07 09:45:41.852228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.275 [2024-11-07 09:45:41.852276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.275 [2024-11-07 09:45:41.852284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:14.275 [2024-11-07 09:45:41.852294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:17:14.275 [2024-11-07 09:45:41.852301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.275 [2024-11-07 09:45:41.852325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.275 [2024-11-07 09:45:41.852335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:14.275 [2024-11-07 09:45:41.852343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:14.275 [2024-11-07 09:45:41.852350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.275 [2024-11-07 09:45:41.852381] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:14.275 [2024-11-07 09:45:41.852390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.275 [2024-11-07 09:45:41.852398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:14.275 [2024-11-07 09:45:41.852405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:14.275 [2024-11-07 09:45:41.852412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.275 [2024-11-07 09:45:41.876681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.275 [2024-11-07 09:45:41.876738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:14.275 [2024-11-07 09:45:41.876750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.250 ms 00:17:14.275 [2024-11-07 09:45:41.876758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.275 [2024-11-07 09:45:41.876891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.275 [2024-11-07 09:45:41.876902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:14.275 [2024-11-07 09:45:41.876911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:17:14.275 [2024-11-07 09:45:41.876918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
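[Editor's note] Every management step in this log is emitted by mngt/ftl_mngt.c:trace_step as a fixed four-entry group: Action (or Rollback), name, duration, status. That structure makes a captured log easy to profile: in this startup, Restore P2L checkpoints (55.980 ms), Initialize NV cache (40.405 ms), Initialize metadata (25.893 ms), Initialize L2P (25.503 ms) and Restore valid map metadata (24.999 ms) dominate the 'FTL startup' total of 279.894 ms reported just below. A hypothetical pipeline to rank steps by duration; not part of the SPDK tree, and it assumes the log was saved as build.log with one trace_step entry per line:

    grep 'trace_step' build.log \
      | awk '/name:/     { sub(/.*name: /, ""); step = $0 }
             /duration:/ { sub(/.*duration: /, ""); sub(/ ms.*/, "")
                           printf "%10.3f ms  %s\n", $0, step }' \
      | sort -rn | head

sort -rn orders the fixed-width durations numerically so the slowest steps surface first; the same pipeline works on the 'FTL shutdown' groups elsewhere in this log.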
00:17:14.275 [2024-11-07 09:45:41.878165] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:14.275 [2024-11-07 09:45:41.881611] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 279.894 ms, result 0 00:17:14.275 [2024-11-07 09:45:41.882343] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:14.275 [2024-11-07 09:45:41.895843] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:15.646  [2024-11-07T09:45:44.249Z] Copying: 42/256 [MB] (42 MBps) [2024-11-07T09:45:45.228Z] Copying: 84/256 [MB] (42 MBps) [2024-11-07T09:45:46.157Z] Copying: 126/256 [MB] (42 MBps) [2024-11-07T09:45:47.089Z] Copying: 169/256 [MB] (42 MBps) [2024-11-07T09:45:48.022Z] Copying: 214/256 [MB] (45 MBps) [2024-11-07T09:45:48.022Z] Copying: 256/256 [MB] (average 42 MBps)[2024-11-07 09:45:47.871251] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:20.351 [2024-11-07 09:45:47.880203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.351 [2024-11-07 09:45:47.880245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:20.351 [2024-11-07 09:45:47.880259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:20.351 [2024-11-07 09:45:47.880267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.351 [2024-11-07 09:45:47.880300] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:20.351 [2024-11-07 09:45:47.882867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.351 [2024-11-07 09:45:47.882897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:20.351 [2024-11-07 09:45:47.882907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.554 ms 00:17:20.351 [2024-11-07 09:45:47.882916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.351 [2024-11-07 09:45:47.884645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.351 [2024-11-07 09:45:47.884676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:20.351 [2024-11-07 09:45:47.884686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.704 ms 00:17:20.351 [2024-11-07 09:45:47.884694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.351 [2024-11-07 09:45:47.891518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.351 [2024-11-07 09:45:47.891550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:20.351 [2024-11-07 09:45:47.891565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.806 ms 00:17:20.351 [2024-11-07 09:45:47.891573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.351 [2024-11-07 09:45:47.898507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.351 [2024-11-07 09:45:47.898534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:20.351 [2024-11-07 09:45:47.898544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.884 ms 00:17:20.351 [2024-11-07 09:45:47.898553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.351 [2024-11-07 
09:45:47.922580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.351 [2024-11-07 09:45:47.922623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:20.351 [2024-11-07 09:45:47.922644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.981 ms 00:17:20.351 [2024-11-07 09:45:47.922653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.351 [2024-11-07 09:45:47.936869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.351 [2024-11-07 09:45:47.936912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:20.351 [2024-11-07 09:45:47.936925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.174 ms 00:17:20.351 [2024-11-07 09:45:47.936936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.351 [2024-11-07 09:45:47.937084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.351 [2024-11-07 09:45:47.937094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:20.351 [2024-11-07 09:45:47.937104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:17:20.351 [2024-11-07 09:45:47.937111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.351 [2024-11-07 09:45:47.960164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.351 [2024-11-07 09:45:47.960203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:20.351 [2024-11-07 09:45:47.960216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.036 ms 00:17:20.351 [2024-11-07 09:45:47.960224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.351 [2024-11-07 09:45:47.982414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.351 [2024-11-07 09:45:47.982454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:20.351 [2024-11-07 09:45:47.982466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.148 ms 00:17:20.351 [2024-11-07 09:45:47.982474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.351 [2024-11-07 09:45:48.004858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.351 [2024-11-07 09:45:48.004898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:20.351 [2024-11-07 09:45:48.004910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.340 ms 00:17:20.351 [2024-11-07 09:45:48.004919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.610 [2024-11-07 09:45:48.026875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.610 [2024-11-07 09:45:48.026917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:20.610 [2024-11-07 09:45:48.026929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.883 ms 00:17:20.610 [2024-11-07 09:45:48.026937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.610 [2024-11-07 09:45:48.026979] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:20.611 [2024-11-07 09:45:48.026994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:20.611 [2024-11-07 09:45:48.027003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 
00:17:20.611 [2024-11-07 09:45:48.027011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 3-100: 0 / 261120 wr_cnt: 0 state: free [... 98 identical per-band entries condensed ...] 00:17:20.612 [2024-11-07 09:45:48.027789] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:20.612 [2024-11-07 09:45:48.027797] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*:
[FTL][ftl0] device UUID: 5c1cbb46-fd78-473b-92db-2b008b37049d 00:17:20.612 [2024-11-07 09:45:48.027805] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:20.612 [2024-11-07 09:45:48.027812] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:20.612 [2024-11-07 09:45:48.027819] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:20.612 [2024-11-07 09:45:48.027827] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:20.612 [2024-11-07 09:45:48.027834] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:20.612 [2024-11-07 09:45:48.027842] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:20.612 [2024-11-07 09:45:48.027849] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:20.612 [2024-11-07 09:45:48.027855] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:20.612 [2024-11-07 09:45:48.027862] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:20.612 [2024-11-07 09:45:48.027868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.612 [2024-11-07 09:45:48.027878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:20.612 [2024-11-07 09:45:48.027887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.891 ms 00:17:20.612 [2024-11-07 09:45:48.027894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.612 [2024-11-07 09:45:48.040066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.612 [2024-11-07 09:45:48.040107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:20.612 [2024-11-07 09:45:48.040118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.134 ms 00:17:20.612 [2024-11-07 09:45:48.040126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.612 [2024-11-07 09:45:48.040490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.612 [2024-11-07 09:45:48.040509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:20.612 [2024-11-07 09:45:48.040518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.323 ms 00:17:20.612 [2024-11-07 09:45:48.040526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.612 [2024-11-07 09:45:48.075072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.612 [2024-11-07 09:45:48.075124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:20.612 [2024-11-07 09:45:48.075135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.612 [2024-11-07 09:45:48.075143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.612 [2024-11-07 09:45:48.075242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.612 [2024-11-07 09:45:48.075252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:20.612 [2024-11-07 09:45:48.075260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.612 [2024-11-07 09:45:48.075267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.612 [2024-11-07 09:45:48.075311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.612 [2024-11-07 09:45:48.075320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
trim map 00:17:20.612 [2024-11-07 09:45:48.075328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.612 [2024-11-07 09:45:48.075336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.612 [2024-11-07 09:45:48.075353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.612 [2024-11-07 09:45:48.075364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:20.612 [2024-11-07 09:45:48.075371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.612 [2024-11-07 09:45:48.075379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.612 [2024-11-07 09:45:48.150615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.612 [2024-11-07 09:45:48.150684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:20.612 [2024-11-07 09:45:48.150695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.612 [2024-11-07 09:45:48.150703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.612 [2024-11-07 09:45:48.212752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.612 [2024-11-07 09:45:48.212808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:20.612 [2024-11-07 09:45:48.212819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.612 [2024-11-07 09:45:48.212827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.612 [2024-11-07 09:45:48.212879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.612 [2024-11-07 09:45:48.212888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:20.612 [2024-11-07 09:45:48.212896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.612 [2024-11-07 09:45:48.212903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.612 [2024-11-07 09:45:48.212931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.612 [2024-11-07 09:45:48.212939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:20.612 [2024-11-07 09:45:48.212949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.612 [2024-11-07 09:45:48.212957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.612 [2024-11-07 09:45:48.213048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.612 [2024-11-07 09:45:48.213058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:20.612 [2024-11-07 09:45:48.213066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.612 [2024-11-07 09:45:48.213074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.612 [2024-11-07 09:45:48.213106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.612 [2024-11-07 09:45:48.213114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:20.612 [2024-11-07 09:45:48.213122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.612 [2024-11-07 09:45:48.213132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.612 [2024-11-07 09:45:48.213167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.612 [2024-11-07 09:45:48.213175] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:20.612 [2024-11-07 09:45:48.213183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.612 [2024-11-07 09:45:48.213190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.612 [2024-11-07 09:45:48.213228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.612 [2024-11-07 09:45:48.213237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:20.612 [2024-11-07 09:45:48.213247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.612 [2024-11-07 09:45:48.213254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.612 [2024-11-07 09:45:48.213380] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 333.173 ms, result 0 00:17:21.582 00:17:21.582 00:17:21.582 09:45:49 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=73950 00:17:21.582 09:45:49 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 73950 00:17:21.582 09:45:49 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:21.582 09:45:49 ftl.ftl_trim -- common/autotest_common.sh@833 -- # '[' -z 73950 ']' 00:17:21.582 09:45:49 ftl.ftl_trim -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:21.582 09:45:49 ftl.ftl_trim -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:21.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:21.582 09:45:49 ftl.ftl_trim -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:21.582 09:45:49 ftl.ftl_trim -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:21.582 09:45:49 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:21.582 [2024-11-07 09:45:49.248233] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
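The xtrace lines above show the harness pattern for bringing up the target: trim.sh launches spdk_tgt with -L ftl_init, records svcpid, and blocks in waitforlisten until the target answers on /var/tmp/spdk.sock. Below is a minimal standalone sketch of that startup wait; the paths are taken from the log, while the rpc_get_methods polling loop is an assumption standing in for the harness's waitforlisten helper:

  #!/usr/bin/env bash
  # Paths as they appear in the log above.
  spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Launch the target with FTL init tracing enabled (-L ftl_init), as trim.sh does.
  "$spdk_tgt" -L ftl_init &
  svcpid=$!

  # rpc.py talks to /var/tmp/spdk.sock by default and exits non-zero until the
  # target is listening, so polling a cheap RPC serves as a readiness check.
  until "$rpc" rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done
  echo "spdk_tgt (pid ${svcpid}) is up and listening"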
00:17:21.582 [2024-11-07 09:45:49.248358] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73950 ] 00:17:21.840 [2024-11-07 09:45:49.400793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.840 [2024-11-07 09:45:49.498005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.409 09:45:50 ftl.ftl_trim -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:22.409 09:45:50 ftl.ftl_trim -- common/autotest_common.sh@866 -- # return 0 00:17:22.673 09:45:50 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:22.673 [2024-11-07 09:45:50.280331] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:22.673 [2024-11-07 09:45:50.280393] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:22.933 [2024-11-07 09:45:50.451460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.933 [2024-11-07 09:45:50.451510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:22.933 [2024-11-07 09:45:50.451525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:22.933 [2024-11-07 09:45:50.451535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.933 [2024-11-07 09:45:50.454238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.933 [2024-11-07 09:45:50.454273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:22.933 [2024-11-07 09:45:50.454285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.684 ms 00:17:22.952 [2024-11-07 09:45:50.454292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.952 [2024-11-07 09:45:50.454432] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:22.952 [2024-11-07 09:45:50.455108] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:22.952 [2024-11-07 09:45:50.455135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.952 [2024-11-07 09:45:50.455143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:22.952 [2024-11-07 09:45:50.455154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.712 ms 00:17:22.952 [2024-11-07 09:45:50.455161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.952 [2024-11-07 09:45:50.456248] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:22.952 [2024-11-07 09:45:50.468326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.952 [2024-11-07 09:45:50.468366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:22.952 [2024-11-07 09:45:50.468379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.082 ms 00:17:22.952 [2024-11-07 09:45:50.468389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.952 [2024-11-07 09:45:50.468470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.952 [2024-11-07 09:45:50.468483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:22.952 [2024-11-07 09:45:50.468492] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:17:22.952 [2024-11-07 09:45:50.468501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.952 [2024-11-07 09:45:50.473087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.952 [2024-11-07 09:45:50.473123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:22.952 [2024-11-07 09:45:50.473132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.538 ms 00:17:22.952 [2024-11-07 09:45:50.473142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.952 [2024-11-07 09:45:50.473234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.952 [2024-11-07 09:45:50.473246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:22.952 [2024-11-07 09:45:50.473254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:17:22.952 [2024-11-07 09:45:50.473264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.952 [2024-11-07 09:45:50.473293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.952 [2024-11-07 09:45:50.473303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:22.952 [2024-11-07 09:45:50.473310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:22.952 [2024-11-07 09:45:50.473319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.952 [2024-11-07 09:45:50.473341] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:22.952 [2024-11-07 09:45:50.476648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.953 [2024-11-07 09:45:50.476674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:22.953 [2024-11-07 09:45:50.476685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.310 ms 00:17:22.953 [2024-11-07 09:45:50.476692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.953 [2024-11-07 09:45:50.476732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.953 [2024-11-07 09:45:50.476740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:22.953 [2024-11-07 09:45:50.476750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:22.953 [2024-11-07 09:45:50.476759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.953 [2024-11-07 09:45:50.476781] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:22.953 [2024-11-07 09:45:50.476797] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:22.953 [2024-11-07 09:45:50.476836] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:22.953 [2024-11-07 09:45:50.476850] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:22.953 [2024-11-07 09:45:50.476954] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:22.953 [2024-11-07 09:45:50.476964] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:22.953 [2024-11-07 09:45:50.476978] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:22.953 [2024-11-07 09:45:50.476989] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:22.953 [2024-11-07 09:45:50.476999] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:22.953 [2024-11-07 09:45:50.477007] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:22.953 [2024-11-07 09:45:50.477016] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:22.953 [2024-11-07 09:45:50.477022] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:22.953 [2024-11-07 09:45:50.477032] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:22.953 [2024-11-07 09:45:50.477039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.953 [2024-11-07 09:45:50.477048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:22.953 [2024-11-07 09:45:50.477056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:17:22.953 [2024-11-07 09:45:50.477064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.953 [2024-11-07 09:45:50.477151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.953 [2024-11-07 09:45:50.477161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:22.953 [2024-11-07 09:45:50.477168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:22.953 [2024-11-07 09:45:50.477176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.953 [2024-11-07 09:45:50.477289] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:22.953 [2024-11-07 09:45:50.477301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:22.953 [2024-11-07 09:45:50.477309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:22.953 [2024-11-07 09:45:50.477318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:22.953 [2024-11-07 09:45:50.477326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:22.953 [2024-11-07 09:45:50.477334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:22.953 [2024-11-07 09:45:50.477340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:22.953 [2024-11-07 09:45:50.477351] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:22.953 [2024-11-07 09:45:50.477358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:22.953 [2024-11-07 09:45:50.477367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:22.953 [2024-11-07 09:45:50.477373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:22.953 [2024-11-07 09:45:50.477381] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:22.953 [2024-11-07 09:45:50.477387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:22.953 [2024-11-07 09:45:50.477396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:22.953 [2024-11-07 09:45:50.477402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:22.953 [2024-11-07 09:45:50.477410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:22.953 
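A quick consistency check on the layout numbers just dumped, assuming the obvious reading of the fields: 23592960 L2P entries x 4 bytes of address size = 94,371,840 bytes = 90.00 MiB, which is exactly the size reported for Region l2p in the NV cache layout above.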
[2024-11-07 09:45:50.477417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:22.953 [2024-11-07 09:45:50.477425] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:22.953 [2024-11-07 09:45:50.477431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:22.953 [2024-11-07 09:45:50.477439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:22.953 [2024-11-07 09:45:50.477450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:22.953 [2024-11-07 09:45:50.477460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:22.953 [2024-11-07 09:45:50.477466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:22.953 [2024-11-07 09:45:50.477476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:22.953 [2024-11-07 09:45:50.477483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:22.953 [2024-11-07 09:45:50.477491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:22.953 [2024-11-07 09:45:50.477497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:22.953 [2024-11-07 09:45:50.477505] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:22.953 [2024-11-07 09:45:50.477511] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:22.953 [2024-11-07 09:45:50.477519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:22.953 [2024-11-07 09:45:50.477526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:22.953 [2024-11-07 09:45:50.477535] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:22.953 [2024-11-07 09:45:50.477541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:22.953 [2024-11-07 09:45:50.477549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:22.953 [2024-11-07 09:45:50.477555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:22.953 [2024-11-07 09:45:50.477563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:22.953 [2024-11-07 09:45:50.477569] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:22.953 [2024-11-07 09:45:50.477577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:22.953 [2024-11-07 09:45:50.477584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:22.953 [2024-11-07 09:45:50.477593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:22.953 [2024-11-07 09:45:50.477600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:22.953 [2024-11-07 09:45:50.477608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:22.953 [2024-11-07 09:45:50.477614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:22.953 [2024-11-07 09:45:50.477622] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:22.953 [2024-11-07 09:45:50.477648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:22.953 [2024-11-07 09:45:50.477660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:22.953 [2024-11-07 09:45:50.477667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:22.953 [2024-11-07 09:45:50.477676] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:17:22.953 [2024-11-07 09:45:50.477683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:22.953 [2024-11-07 09:45:50.477691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:22.953 [2024-11-07 09:45:50.477698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:22.953 [2024-11-07 09:45:50.477706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:22.953 [2024-11-07 09:45:50.477712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:22.953 [2024-11-07 09:45:50.477722] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:22.953 [2024-11-07 09:45:50.477731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:22.953 [2024-11-07 09:45:50.477743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:22.953 [2024-11-07 09:45:50.477751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:22.953 [2024-11-07 09:45:50.477761] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:22.953 [2024-11-07 09:45:50.477768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:22.953 [2024-11-07 09:45:50.477777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:22.954 [2024-11-07 09:45:50.477784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:22.954 [2024-11-07 09:45:50.477793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:22.954 [2024-11-07 09:45:50.477800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:22.954 [2024-11-07 09:45:50.477808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:22.954 [2024-11-07 09:45:50.477815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:22.954 [2024-11-07 09:45:50.477824] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:22.954 [2024-11-07 09:45:50.477831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:22.954 [2024-11-07 09:45:50.477839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:22.954 [2024-11-07 09:45:50.477846] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:22.954 [2024-11-07 09:45:50.477854] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:22.954 [2024-11-07 
09:45:50.477862] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:22.954 [2024-11-07 09:45:50.477873] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:22.954 [2024-11-07 09:45:50.477880] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:22.954 [2024-11-07 09:45:50.477889] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:22.954 [2024-11-07 09:45:50.477896] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:22.954 [2024-11-07 09:45:50.477904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.954 [2024-11-07 09:45:50.477911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:22.954 [2024-11-07 09:45:50.477920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.681 ms 00:17:22.954 [2024-11-07 09:45:50.477927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.954 [2024-11-07 09:45:50.503466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.954 [2024-11-07 09:45:50.503502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:22.954 [2024-11-07 09:45:50.503514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.481 ms 00:17:22.954 [2024-11-07 09:45:50.503521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.954 [2024-11-07 09:45:50.503655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.954 [2024-11-07 09:45:50.503666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:22.954 [2024-11-07 09:45:50.503675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:17:22.954 [2024-11-07 09:45:50.503683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.954 [2024-11-07 09:45:50.533675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.954 [2024-11-07 09:45:50.533708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:22.954 [2024-11-07 09:45:50.533723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.969 ms 00:17:22.954 [2024-11-07 09:45:50.533730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.954 [2024-11-07 09:45:50.533789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.954 [2024-11-07 09:45:50.533799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:22.954 [2024-11-07 09:45:50.533809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:22.954 [2024-11-07 09:45:50.533816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.954 [2024-11-07 09:45:50.534129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.954 [2024-11-07 09:45:50.534141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:22.954 [2024-11-07 09:45:50.534151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:17:22.954 [2024-11-07 09:45:50.534161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:22.954 [2024-11-07 09:45:50.534284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.954 [2024-11-07 09:45:50.534299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:22.954 [2024-11-07 09:45:50.534308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:17:22.954 [2024-11-07 09:45:50.534316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.954 [2024-11-07 09:45:50.548501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.954 [2024-11-07 09:45:50.548531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:22.954 [2024-11-07 09:45:50.548543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.162 ms 00:17:22.954 [2024-11-07 09:45:50.548550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.954 [2024-11-07 09:45:50.560905] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:22.954 [2024-11-07 09:45:50.560939] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:22.954 [2024-11-07 09:45:50.560952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.954 [2024-11-07 09:45:50.560960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:22.954 [2024-11-07 09:45:50.560970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.281 ms 00:17:22.954 [2024-11-07 09:45:50.560978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.954 [2024-11-07 09:45:50.585172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.954 [2024-11-07 09:45:50.585210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:22.954 [2024-11-07 09:45:50.585223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.122 ms 00:17:22.954 [2024-11-07 09:45:50.585231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.954 [2024-11-07 09:45:50.596572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.954 [2024-11-07 09:45:50.596603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:22.954 [2024-11-07 09:45:50.596616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.277 ms 00:17:22.954 [2024-11-07 09:45:50.596623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.214 [2024-11-07 09:45:50.607706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.214 [2024-11-07 09:45:50.607737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:23.214 [2024-11-07 09:45:50.607749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.009 ms 00:17:23.214 [2024-11-07 09:45:50.607756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.214 [2024-11-07 09:45:50.608367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.214 [2024-11-07 09:45:50.608392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:23.214 [2024-11-07 09:45:50.608402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.521 ms 00:17:23.214 [2024-11-07 09:45:50.608409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.214 [2024-11-07 
09:45:50.670783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.214 [2024-11-07 09:45:50.670844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:23.214 [2024-11-07 09:45:50.670861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.347 ms 00:17:23.214 [2024-11-07 09:45:50.670870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.214 [2024-11-07 09:45:50.681161] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:23.214 [2024-11-07 09:45:50.695010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.214 [2024-11-07 09:45:50.695053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:23.214 [2024-11-07 09:45:50.695068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.029 ms 00:17:23.214 [2024-11-07 09:45:50.695077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.214 [2024-11-07 09:45:50.695160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.214 [2024-11-07 09:45:50.695172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:23.214 [2024-11-07 09:45:50.695181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:23.214 [2024-11-07 09:45:50.695189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.214 [2024-11-07 09:45:50.695242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.214 [2024-11-07 09:45:50.695253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:23.214 [2024-11-07 09:45:50.695261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:17:23.214 [2024-11-07 09:45:50.695270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.214 [2024-11-07 09:45:50.695295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.214 [2024-11-07 09:45:50.695304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:23.214 [2024-11-07 09:45:50.695312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:23.214 [2024-11-07 09:45:50.695322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.214 [2024-11-07 09:45:50.695350] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:23.214 [2024-11-07 09:45:50.695362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.214 [2024-11-07 09:45:50.695369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:23.214 [2024-11-07 09:45:50.695380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:23.214 [2024-11-07 09:45:50.695387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.214 [2024-11-07 09:45:50.717873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.214 [2024-11-07 09:45:50.717911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:23.214 [2024-11-07 09:45:50.717926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.461 ms 00:17:23.214 [2024-11-07 09:45:50.717935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.214 [2024-11-07 09:45:50.718027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.214 [2024-11-07 09:45:50.718038] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:23.214 [2024-11-07 09:45:50.718048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:17:23.214 [2024-11-07 09:45:50.718057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.214 [2024-11-07 09:45:50.718821] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:23.214 [2024-11-07 09:45:50.721872] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 267.079 ms, result 0 00:17:23.214 [2024-11-07 09:45:50.722676] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:23.214 Some configs were skipped because the RPC state that can call them passed over. 00:17:23.214 09:45:50 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:23.472 [2024-11-07 09:45:50.909118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.472 [2024-11-07 09:45:50.909170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:23.472 [2024-11-07 09:45:50.909182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.724 ms 00:17:23.472 [2024-11-07 09:45:50.909192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.472 [2024-11-07 09:45:50.909225] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.835 ms, result 0 00:17:23.472 true 00:17:23.472 09:45:50 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:23.472 [2024-11-07 09:45:51.108462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.472 [2024-11-07 09:45:51.108508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:23.472 [2024-11-07 09:45:51.108521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.816 ms 00:17:23.472 [2024-11-07 09:45:51.108528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.472 [2024-11-07 09:45:51.108564] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 0.930 ms, result 0 00:17:23.472 true 00:17:23.472 09:45:51 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 73950 00:17:23.472 09:45:51 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 73950 ']' 00:17:23.472 09:45:51 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 73950 00:17:23.472 09:45:51 ftl.ftl_trim -- common/autotest_common.sh@957 -- # uname 00:17:23.472 09:45:51 ftl.ftl_trim -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:23.472 09:45:51 ftl.ftl_trim -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73950 00:17:23.730 09:45:51 ftl.ftl_trim -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:23.730 09:45:51 ftl.ftl_trim -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:23.730 killing process with pid 73950 00:17:23.730 09:45:51 ftl.ftl_trim -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73950' 00:17:23.730 09:45:51 ftl.ftl_trim -- common/autotest_common.sh@971 -- # kill 73950 00:17:23.730 09:45:51 ftl.ftl_trim -- common/autotest_common.sh@976 -- # wait 73950 00:17:24.297 [2024-11-07 09:45:51.835964] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.297 [2024-11-07 09:45:51.836018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:24.297 [2024-11-07 09:45:51.836030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:24.297 [2024-11-07 09:45:51.836039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.297 [2024-11-07 09:45:51.836061] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:24.297 [2024-11-07 09:45:51.838680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.297 [2024-11-07 09:45:51.838711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:24.297 [2024-11-07 09:45:51.838724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.603 ms 00:17:24.297 [2024-11-07 09:45:51.838732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.297 [2024-11-07 09:45:51.839010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.297 [2024-11-07 09:45:51.839019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:24.297 [2024-11-07 09:45:51.839028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.254 ms 00:17:24.297 [2024-11-07 09:45:51.839036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.297 [2024-11-07 09:45:51.843070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.297 [2024-11-07 09:45:51.843099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:24.297 [2024-11-07 09:45:51.843112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.012 ms 00:17:24.297 [2024-11-07 09:45:51.843119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.297 [2024-11-07 09:45:51.850022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.297 [2024-11-07 09:45:51.850049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:24.297 [2024-11-07 09:45:51.850060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.869 ms 00:17:24.297 [2024-11-07 09:45:51.850068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.297 [2024-11-07 09:45:51.859131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.297 [2024-11-07 09:45:51.859160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:24.297 [2024-11-07 09:45:51.859173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.011 ms 00:17:24.297 [2024-11-07 09:45:51.859186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.297 [2024-11-07 09:45:51.866443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.297 [2024-11-07 09:45:51.866474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:24.297 [2024-11-07 09:45:51.866487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.220 ms 00:17:24.297 [2024-11-07 09:45:51.866495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.297 [2024-11-07 09:45:51.866615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.297 [2024-11-07 09:45:51.866625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:24.297 [2024-11-07 09:45:51.866649] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:17:24.297 [2024-11-07 09:45:51.866656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.297 [2024-11-07 09:45:51.875881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.297 [2024-11-07 09:45:51.875909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:24.297 [2024-11-07 09:45:51.875919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.191 ms 00:17:24.297 [2024-11-07 09:45:51.875926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.297 [2024-11-07 09:45:51.884647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.297 [2024-11-07 09:45:51.884675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:24.297 [2024-11-07 09:45:51.884688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.684 ms 00:17:24.297 [2024-11-07 09:45:51.884695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.297 [2024-11-07 09:45:51.893168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.297 [2024-11-07 09:45:51.893206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:24.298 [2024-11-07 09:45:51.893218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.383 ms 00:17:24.298 [2024-11-07 09:45:51.893224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.298 [2024-11-07 09:45:51.901870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.298 [2024-11-07 09:45:51.901901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:24.298 [2024-11-07 09:45:51.901913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.584 ms 00:17:24.298 [2024-11-07 09:45:51.901921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.298 [2024-11-07 09:45:51.901966] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:24.298 [2024-11-07 09:45:51.901981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:24.298 [2024-11-07 09:45:51.901993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:24.298 [2024-11-07 09:45:51.902001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:24.298 [2024-11-07 09:45:51.902010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:24.298 [2024-11-07 09:45:51.902018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:24.298 [2024-11-07 09:45:51.902029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:24.298 [2024-11-07 09:45:51.902036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:24.298 [2024-11-07 09:45:51.902045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:24.298 [2024-11-07 09:45:51.902053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:24.298 [2024-11-07 09:45:51.902062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:24.298 [2024-11-07 09:45:51.902069] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11-100: 0 / 261120 wr_cnt: 0 state: free 00:17:24.299 [2024-11-07 09:45:51.902838] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:24.299 [2024-11-07 09:45:51.902851] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5c1cbb46-fd78-473b-92db-2b008b37049d 00:17:24.299 [2024-11-07 09:45:51.902864] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:24.299 [2024-11-07 09:45:51.902875] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:24.299 [2024-11-07 09:45:51.902882] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:24.299 [2024-11-07 09:45:51.902891] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:24.299 [2024-11-07 09:45:51.902898] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:24.299 [2024-11-07 09:45:51.902906] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:24.299 [2024-11-07 09:45:51.902913] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:24.299 [2024-11-07 09:45:51.902921] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:24.299 [2024-11-07 09:45:51.902927] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:24.299 [2024-11-07 09:45:51.902935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:24.299 [2024-11-07 09:45:51.902942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:24.299 [2024-11-07 09:45:51.902951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.971 ms 00:17:24.299 [2024-11-07 09:45:51.902958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.299 [2024-11-07 09:45:51.915285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.299 [2024-11-07 09:45:51.915314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:24.299 [2024-11-07 09:45:51.915329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.302 ms 00:17:24.299 [2024-11-07 09:45:51.915338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.299 [2024-11-07 09:45:51.915726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.299 [2024-11-07 09:45:51.915744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:24.299 [2024-11-07 09:45:51.915753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.342 ms 00:17:24.299 [2024-11-07 09:45:51.915763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.299 [2024-11-07 09:45:51.959014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.299 [2024-11-07 09:45:51.959059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:24.299 [2024-11-07 09:45:51.959072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.299 [2024-11-07 09:45:51.959079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.299 [2024-11-07 09:45:51.959195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.299 [2024-11-07 09:45:51.959204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:24.299 [2024-11-07 09:45:51.959214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.299 [2024-11-07 09:45:51.959223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.299 [2024-11-07 09:45:51.959282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.299 [2024-11-07 09:45:51.959292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:24.299 [2024-11-07 09:45:51.959303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.299 [2024-11-07 09:45:51.959309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.299 [2024-11-07 09:45:51.959329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.299 [2024-11-07 09:45:51.959336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:24.299 [2024-11-07 09:45:51.959345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.299 [2024-11-07 09:45:51.959352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.602 [2024-11-07 09:45:52.034443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.602 [2024-11-07 09:45:52.034490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:24.602 [2024-11-07 09:45:52.034503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.602 [2024-11-07 09:45:52.034510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.602 [2024-11-07 
09:45:52.083491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.602 [2024-11-07 09:45:52.083531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:24.602 [2024-11-07 09:45:52.083542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.602 [2024-11-07 09:45:52.083550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.602 [2024-11-07 09:45:52.084527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.602 [2024-11-07 09:45:52.084558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:24.602 [2024-11-07 09:45:52.084569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.602 [2024-11-07 09:45:52.084574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.602 [2024-11-07 09:45:52.084603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.602 [2024-11-07 09:45:52.084609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:24.602 [2024-11-07 09:45:52.084616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.602 [2024-11-07 09:45:52.084623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.602 [2024-11-07 09:45:52.084715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.602 [2024-11-07 09:45:52.084723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:24.602 [2024-11-07 09:45:52.084730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.602 [2024-11-07 09:45:52.084736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.602 [2024-11-07 09:45:52.084761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.602 [2024-11-07 09:45:52.084767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:24.602 [2024-11-07 09:45:52.084774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.602 [2024-11-07 09:45:52.084780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.602 [2024-11-07 09:45:52.084808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.602 [2024-11-07 09:45:52.084817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:24.602 [2024-11-07 09:45:52.084826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.602 [2024-11-07 09:45:52.084831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.602 [2024-11-07 09:45:52.084865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:24.602 [2024-11-07 09:45:52.084873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:24.602 [2024-11-07 09:45:52.084880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:24.602 [2024-11-07 09:45:52.084886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.602 [2024-11-07 09:45:52.084991] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 249.014 ms, result 0 00:17:25.167 09:45:52 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:25.167 09:45:52 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
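The dd invocation above can be repeated by hand against the same configuration; a minimal sketch, assuming the repo checkout and the ftl.json generated by the suite are still in place (flag meanings follow the logged command line; confirm with spdk_dd --help on your build):

#!/usr/bin/env bash
# Sketch: repeat the trim.sh copy step outside the test suite.
#   --ib=ftl0  read from the bdev named ftl0 (the FTL device under test)
#   --of=...   write into a regular file
#   --count    number of blocks to copy (65536 here)
#   --json     bdev config that defines ftl0 and its base/cache bdevs
SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/bin/spdk_dd" --ib=ftl0 --of="$SPDK/test/ftl/data" \
    --count=65536 --json="$SPDK/test/ftl/config/ftl.json"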
[2024-11-07 09:45:52.664515] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization...
[2024-11-07 09:45:52.664645] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74003 ]
[2024-11-07 09:45:52.820110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-07 09:45:52.895696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[2024-11-07 09:45:53.101018] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-11-07 09:45:53.101069] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-11-07 09:45:53.248740] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Check configuration (duration: 0.003 ms, status: 0)
[2024-11-07 09:45:53.250859] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Open base bdev (duration: 2.051 ms, status: 0)
[2024-11-07 09:45:53.251072] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
[2024-11-07 09:45:53.251710] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
[2024-11-07 09:45:53.251735] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Open cache bdev (duration: 0.668 ms, status: 0)
[2024-11-07 09:45:53.252713] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
[2024-11-07 09:45:53.262078] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Load super block (duration: 9.365 ms, status: 0)
[2024-11-07 09:45:53.262270] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Validate super block (duration: 0.016 ms, status: 0)
[2024-11-07 09:45:53.266625] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize memory pools (duration: 4.306 ms, status: 0)
[2024-11-07 09:45:53.266869] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands (duration: 0.042 ms, status: 0)
[2024-11-07 09:45:53.267072] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Register IO device (duration: 0.005 ms, status: 0)
[2024-11-07 09:45:53.267144] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
[2024-11-07 09:45:53.269832] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize core IO channel (duration: 2.691 ms, status: 0)
[2024-11-07 09:45:53.270028] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Decorate bands (duration: 0.009 ms, status: 0)
[2024-11-07 09:45:53.270128] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
[2024-11-07 09:45:53.270156] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
[2024-11-07 09:45:53.270221] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
[2024-11-07 09:45:53.270275] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
[2024-11-07 09:45:53.270388] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
[2024-11-07 09:45:53.270437] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
[2024-11-07 09:45:53.270488] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
[2024-11-07 09:45:53.270514] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
[2024-11-07 09:45:53.270540] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
[2024-11-07 09:45:53.270562] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960
[2024-11-07 09:45:53.270618] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
[2024-11-07 09:45:53.270663] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
[2024-11-07 09:45:53.270678] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
[2024-11-07 09:45:53.270693] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize layout (duration: 0.566 ms, status: 0)
[2024-11-07 09:45:53.270816] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Verify layout (duration: 0.053 ms, status: 0)
[2024-11-07 09:45:53.270958] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
    Region sb:              offset   0.00 MiB, blocks   0.12 MiB
    Region l2p:             offset   0.12 MiB, blocks  90.00 MiB
    Region band_md:         offset  90.12 MiB, blocks   0.50 MiB
    Region band_md_mirror:  offset  90.62 MiB, blocks   0.50 MiB
    Region nvc_md:          offset 123.88 MiB, blocks   0.12 MiB
    Region nvc_md_mirror:   offset 124.00 MiB, blocks   0.12 MiB
    Region p2l0:            offset  91.12 MiB, blocks   8.00 MiB
    Region p2l1:            offset  99.12 MiB, blocks   8.00 MiB
    Region p2l2:            offset 107.12 MiB, blocks   8.00 MiB
    Region p2l3:            offset 115.12 MiB, blocks   8.00 MiB
    Region trim_md:         offset 123.12 MiB, blocks   0.25 MiB
    Region trim_md_mirror:  offset 123.38 MiB, blocks   0.25 MiB
    Region trim_log:        offset 123.62 MiB, blocks   0.12 MiB
    Region trim_log_mirror: offset 123.75 MiB, blocks   0.12 MiB
[2024-11-07 09:45:53.271867] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
    Region sb_mirror:       offset      0.00 MiB, blocks      0.12 MiB
    Region vmap:            offset 102400.25 MiB, blocks      3.38 MiB
    Region data_btm:        offset      0.25 MiB, blocks 102400.00 MiB
[2024-11-07 09:45:53.272076] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
    Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
    Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
    Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
    Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
    Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
    Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
    Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
    Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
    Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
    Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
    Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
    Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
    Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
    Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
    Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
[2024-11-07 09:45:53.272544] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
    Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
    Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
    Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
    Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
    Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
[2024-11-07 09:45:53.272720] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Layout upgrade (duration: 1.813 ms, status: 0)
[2024-11-07 09:45:53.293688] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize metadata (duration: 20.857 ms, status: 0)
[2024-11-07 09:45:53.293940] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize band addresses (duration: 0.050 ms, status: 0)
[2024-11-07 09:45:53.343856] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize NV cache (duration: 49.813 ms, status: 0)
[2024-11-07 09:45:53.344107] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize valid map (duration: 0.003 ms, status: 0)
[2024-11-07 09:45:53.344455] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize trim map (duration: 0.269 ms, status: 0)
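As a consistency check between the superblock dump and the human-readable layout above: region sizes in the SB metadata are counted in FTL blocks, and assuming the usual 4 KiB FTL block size (an assumption, but one that matches every entry here), the l2p region's blk_sz of 0x5a00 is exactly the 90.00 MiB shown, and band_md's 0x80 is the 0.50 MiB shown:

# blk_sz (in 4 KiB blocks) -> KiB; 92160 KiB = 90.00 MiB, 512 KiB = 0.50 MiB
echo $(( 0x5a00 * 4 ))   # l2p:     92160
echo $(( 0x80 * 4 ))     # band_md: 512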
[2024-11-07 09:45:53.344684] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands metadata (duration: 0.094 ms, status: 0)
[2024-11-07 09:45:53.365511] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize reloc (duration: 20.614 ms, status: 0)
[2024-11-07 09:45:53.377893] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3
[2024-11-07 09:45:53.378014] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
[2024-11-07 09:45:53.378079] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore NV cache metadata (duration: 12.231 ms, status: 0)
[2024-11-07 09:45:53.402475] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore valid map metadata (duration: 24.234 ms, status: 0)
[2024-11-07 09:45:53.414537] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore band info metadata (duration: 11.581 ms, status: 0)
[2024-11-07 09:45:53.426001] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore trim metadata (duration: 11.126 ms, status: 0)
[2024-11-07 09:45:53.426812] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize P2L checkpointing (duration: 0.549 ms, status: 0)
[2024-11-07 09:45:53.481044] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore P2L checkpoints (duration: 54.168 ms, status: 0)
[2024-11-07 09:45:53.491311] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
[2024-11-07 09:45:53.504851] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize L2P (duration: 23.647 ms, status: 0)
[2024-11-07 09:45:53.504984] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore L2P (duration: 0.009 ms, status: 0)
[2024-11-07 09:45:53.505056] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize band initialization (duration: 0.028 ms, status: 0)
[2024-11-07 09:45:53.505108] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Start core poller (duration: 0.005 ms, status: 0)
[2024-11-07 09:45:53.505161] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
[2024-11-07 09:45:53.505170] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Self test on startup (duration: 0.010 ms, status: 0)
[2024-11-07 09:45:53.528090] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL dirty state (duration: 22.876 ms, status: 0)
[2024-11-07 09:45:53.528224] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize initialization (duration: 0.035 ms, status: 0)
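Because each management step has been condensed above to a single "Action: name (duration: ..., status: ...)" line, per-step startup timings can be tabulated straight from a saved console log. A hedged sketch; console.log is a hypothetical capture of this output, and the pattern assumes the condensed one-line form used in this transcript:

# Print each FTL management step with its duration, in log order.
awk -F 'Action: |[(]duration: | ms' \
    '/trace_step/ && /Action: / { printf "%-36s %10s ms\n", $2, $3 }' console.log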
[2024-11-07 09:45:53.528972] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
[2024-11-07 09:45:53.532085] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 279.964 ms, result 0
[2024-11-07 09:45:53.532770] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
[2024-11-07 09:45:53.545774] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
[2024-11-07T09:45:55.556Z] Copying: 45/256 [MB] (45 MBps)
[2024-11-07T09:45:56.935Z] Copying: 87/256 [MB] (41 MBps)
[2024-11-07T09:45:57.872Z] Copying: 119/256 [MB] (31 MBps)
[2024-11-07T09:45:58.805Z] Copying: 157/256 [MB] (38 MBps)
[2024-11-07T09:45:59.741Z] Copying: 195/256 [MB] (38 MBps)
[2024-11-07T09:45:59.999Z] Copying: 239/256 [MB] (44 MBps)
[2024-11-07T09:45:59.999Z] Copying: 256/256 [MB] (average 40 MBps)
[2024-11-07 09:45:59.922249] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
[2024-11-07 09:45:59.931384] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinit core IO channel (duration: 0.004 ms, status: 0)
[2024-11-07 09:45:59.931712] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
[2024-11-07 09:45:59.934308] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Unregister IO device (duration: 2.556 ms, status: 0)
[2024-11-07 09:45:59.934903] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Stop core poller (duration: 0.229 ms, status: 0)
[2024-11-07 09:45:59.938766] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist L2P (duration: 3.674 ms, status: 0)
[2024-11-07 09:45:59.945831] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Finish L2P trims (duration: 6.884 ms, status: 0)
[2024-11-07 09:45:59.968405] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist NV cache metadata (duration: 22.355 ms, status: 0)
[2024-11-07 09:45:59.982483] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist valid map metadata (duration: 13.804 ms, status: 0)
[2024-11-07 09:45:59.982862] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist P2L metadata (duration: 0.084 ms, status: 0)
[2024-11-07 09:46:00.005843] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist band info metadata (duration: 22.849 ms, status: 0)
[2024-11-07 09:46:00.028668] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist trim metadata (duration: 22.570 ms, status: 0)
[2024-11-07 09:46:00.051146] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist superblock (duration: 22.185 ms, status: 0)
[2024-11-07 09:46:00.073710] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL clean state (duration: 22.249 ms, status: 0)
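The averaged copy rate is consistent with the wall clock: spdk_dd's app-thread IO channel comes up at 09:45:53.546 and is torn down at 09:45:59.922, roughly 6.4 s for 256 MB:

# 256 MB over the interval between channel create and destroy; prints ~40.2
awk 'BEGIN { printf "%.1f MBps\n", 256 / (59.922 - 53.546) }'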
[2024-11-07 09:46:00.073896] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
[2024-11-07 09:46:00.073910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 1-100: 0 / 261120 wr_cnt: 0 state: free
[2024-11-07 09:46:00.074691] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
[2024-11-07 09:46:00.074699] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5c1cbb46-fd78-473b-92db-2b008b37049d
[2024-11-07 09:46:00.074707] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
[2024-11-07 09:46:00.074714] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
[2024-11-07 09:46:00.074721] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
[2024-11-07 09:46:00.074728] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
[2024-11-07 09:46:00.074735] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: crit: 0, high: 0, low: 0, start: 0
[2024-11-07 09:46:00.074768] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Dump statistics (duration: 0.873 ms, status: 0)
[2024-11-07 09:46:00.086923] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize L2P (duration: 12.099 ms, status: 0)
[2024-11-07 09:46:00.087367] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize P2L checkpointing (duration: 0.350 ms, status: 0)
[2024-11-07 09:46:00.122349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.590 [2024-11-07 09:46:00.122393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.590 [2024-11-07 09:46:00.122402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:32.590 [2024-11-07 09:46:00.122410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.590 [2024-11-07 09:46:00.122418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.590 [2024-11-07 09:46:00.122434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.590 [2024-11-07 09:46:00.122446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:32.590 [2024-11-07 09:46:00.122453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.590 [2024-11-07 09:46:00.122460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.590 [2024-11-07 09:46:00.199361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.590 [2024-11-07 09:46:00.199399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:32.590 [2024-11-07 09:46:00.199409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.590 [2024-11-07 09:46:00.199416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.848 [2024-11-07 09:46:00.262431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.848 [2024-11-07 09:46:00.262477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:32.848 [2024-11-07 09:46:00.262487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.848 [2024-11-07 09:46:00.262495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.848 [2024-11-07 09:46:00.262543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.848 [2024-11-07 09:46:00.262552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:32.848 [2024-11-07 09:46:00.262560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.848 [2024-11-07 09:46:00.262568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.848 [2024-11-07 09:46:00.262596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.848 [2024-11-07 09:46:00.262604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:32.849 [2024-11-07 09:46:00.262615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.849 [2024-11-07 09:46:00.262622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.849 [2024-11-07 09:46:00.262724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.849 [2024-11-07 09:46:00.262734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:32.849 [2024-11-07 09:46:00.262741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.849 [2024-11-07 09:46:00.262748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.849 [2024-11-07 09:46:00.262777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.849 [2024-11-07 09:46:00.262786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:32.849 [2024-11-07 09:46:00.262794] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.849 [2024-11-07 09:46:00.262804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.849 [2024-11-07 09:46:00.262838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.849 [2024-11-07 09:46:00.262846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:32.849 [2024-11-07 09:46:00.262853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.849 [2024-11-07 09:46:00.262860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.849 [2024-11-07 09:46:00.262901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.849 [2024-11-07 09:46:00.262910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:32.849 [2024-11-07 09:46:00.262920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.849 [2024-11-07 09:46:00.262927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.849 [2024-11-07 09:46:00.263053] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 331.660 ms, result 0 00:17:33.415 00:17:33.415 00:17:33.415 09:46:00 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:17:33.415 09:46:00 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:33.981 09:46:01 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:33.981 [2024-11-07 09:46:01.532329] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
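At this point the trim test has finished the FTL shutdown and moves on to data verification: trim.sh compares the first 4 MiB of the test data file against zeros to confirm the trimmed range reads back as zeros, records an md5 checksum of the file, and then rewrites a random pattern through the ftl0 bdev with spdk_dd. A minimal sketch of that sequence, using the exact paths and arguments shown in the log above (only the grouping into a standalone script and the comments are added here; the count of 1024 blocks corresponds to the 4096 kB copy reported further down):

    # Verify the trimmed region reads back as zeros (cmp exits non-zero on any mismatch).
    data=/home/vagrant/spdk_repo/spdk/test/ftl/data
    cmp --bytes=4194304 "$data" /dev/zero
    # Record the file checksum for later comparison.
    md5sum "$data"
    # Rewrite 1024 blocks of the random pattern through the ftl0 bdev,
    # using the FTL bdev configuration captured in ftl.json.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern \
        --ob=ftl0 --count=1024 \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json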
00:17:33.981 [2024-11-07 09:46:01.532446] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74102 ] 00:17:34.239 [2024-11-07 09:46:01.692907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:34.239 [2024-11-07 09:46:01.786099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:34.498 [2024-11-07 09:46:02.034660] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:34.498 [2024-11-07 09:46:02.034716] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:34.757 [2024-11-07 09:46:02.188828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.757 [2024-11-07 09:46:02.188994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:34.757 [2024-11-07 09:46:02.189013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:34.757 [2024-11-07 09:46:02.189022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.757 [2024-11-07 09:46:02.191620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.757 [2024-11-07 09:46:02.191667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:34.757 [2024-11-07 09:46:02.191677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.577 ms 00:17:34.757 [2024-11-07 09:46:02.191685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.757 [2024-11-07 09:46:02.191754] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:34.757 [2024-11-07 09:46:02.192418] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:34.757 [2024-11-07 09:46:02.192443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.757 [2024-11-07 09:46:02.192451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:34.757 [2024-11-07 09:46:02.192460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.696 ms 00:17:34.757 [2024-11-07 09:46:02.192467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.757 [2024-11-07 09:46:02.193603] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:34.757 [2024-11-07 09:46:02.205906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.757 [2024-11-07 09:46:02.205951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:34.757 [2024-11-07 09:46:02.205962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.304 ms 00:17:34.757 [2024-11-07 09:46:02.205970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.757 [2024-11-07 09:46:02.206052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.757 [2024-11-07 09:46:02.206063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:34.757 [2024-11-07 09:46:02.206072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:17:34.757 [2024-11-07 09:46:02.206080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.757 [2024-11-07 09:46:02.210800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:34.757 [2024-11-07 09:46:02.210929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:34.757 [2024-11-07 09:46:02.210943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.681 ms 00:17:34.757 [2024-11-07 09:46:02.210951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.757 [2024-11-07 09:46:02.211037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.757 [2024-11-07 09:46:02.211047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:34.757 [2024-11-07 09:46:02.211054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:17:34.757 [2024-11-07 09:46:02.211061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.757 [2024-11-07 09:46:02.211083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.757 [2024-11-07 09:46:02.211093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:34.757 [2024-11-07 09:46:02.211101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:34.757 [2024-11-07 09:46:02.211108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.757 [2024-11-07 09:46:02.211126] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:34.757 [2024-11-07 09:46:02.214451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.757 [2024-11-07 09:46:02.214566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:34.757 [2024-11-07 09:46:02.214581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.329 ms 00:17:34.757 [2024-11-07 09:46:02.214588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.757 [2024-11-07 09:46:02.214624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.757 [2024-11-07 09:46:02.214648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:34.757 [2024-11-07 09:46:02.214656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:34.757 [2024-11-07 09:46:02.214663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.757 [2024-11-07 09:46:02.214679] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:34.757 [2024-11-07 09:46:02.214699] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:34.757 [2024-11-07 09:46:02.214733] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:34.757 [2024-11-07 09:46:02.214747] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:34.757 [2024-11-07 09:46:02.214849] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:34.757 [2024-11-07 09:46:02.214859] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:34.757 [2024-11-07 09:46:02.214869] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:34.757 [2024-11-07 09:46:02.214879] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:34.757 [2024-11-07 09:46:02.214889] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:34.757 [2024-11-07 09:46:02.214897] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:34.757 [2024-11-07 09:46:02.214904] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:34.757 [2024-11-07 09:46:02.214911] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:34.757 [2024-11-07 09:46:02.214918] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:34.757 [2024-11-07 09:46:02.214925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.757 [2024-11-07 09:46:02.214933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:34.757 [2024-11-07 09:46:02.214940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:17:34.757 [2024-11-07 09:46:02.214946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.757 [2024-11-07 09:46:02.215032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.757 [2024-11-07 09:46:02.215040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:34.757 [2024-11-07 09:46:02.215049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:17:34.757 [2024-11-07 09:46:02.215056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.757 [2024-11-07 09:46:02.215167] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:34.757 [2024-11-07 09:46:02.215177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:34.757 [2024-11-07 09:46:02.215185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:34.757 [2024-11-07 09:46:02.215192] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:34.757 [2024-11-07 09:46:02.215199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:34.757 [2024-11-07 09:46:02.215206] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:34.757 [2024-11-07 09:46:02.215213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:34.757 [2024-11-07 09:46:02.215219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:34.757 [2024-11-07 09:46:02.215227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:34.757 [2024-11-07 09:46:02.215250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:34.757 [2024-11-07 09:46:02.215257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:34.757 [2024-11-07 09:46:02.215264] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:34.757 [2024-11-07 09:46:02.215271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:34.757 [2024-11-07 09:46:02.215283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:34.757 [2024-11-07 09:46:02.215290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:34.757 [2024-11-07 09:46:02.215296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:34.757 [2024-11-07 09:46:02.215303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:34.757 [2024-11-07 09:46:02.215311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:34.757 [2024-11-07 09:46:02.215317] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:34.757 [2024-11-07 09:46:02.215324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:34.757 [2024-11-07 09:46:02.215330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:34.757 [2024-11-07 09:46:02.215337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:34.757 [2024-11-07 09:46:02.215343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:34.758 [2024-11-07 09:46:02.215350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:34.758 [2024-11-07 09:46:02.215356] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:34.758 [2024-11-07 09:46:02.215362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:34.758 [2024-11-07 09:46:02.215369] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:34.758 [2024-11-07 09:46:02.215375] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:34.758 [2024-11-07 09:46:02.215382] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:34.758 [2024-11-07 09:46:02.215388] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:34.758 [2024-11-07 09:46:02.215394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:34.758 [2024-11-07 09:46:02.215400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:34.758 [2024-11-07 09:46:02.215406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:34.758 [2024-11-07 09:46:02.215412] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:34.758 [2024-11-07 09:46:02.215419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:34.758 [2024-11-07 09:46:02.215425] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:34.758 [2024-11-07 09:46:02.215431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:34.758 [2024-11-07 09:46:02.215438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:34.758 [2024-11-07 09:46:02.215444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:34.758 [2024-11-07 09:46:02.215450] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:34.758 [2024-11-07 09:46:02.215457] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:34.758 [2024-11-07 09:46:02.215463] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:34.758 [2024-11-07 09:46:02.215470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:34.758 [2024-11-07 09:46:02.215477] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:34.758 [2024-11-07 09:46:02.215485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:34.758 [2024-11-07 09:46:02.215492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:34.758 [2024-11-07 09:46:02.215501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:34.758 [2024-11-07 09:46:02.215508] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:34.758 [2024-11-07 09:46:02.215514] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:34.758 [2024-11-07 09:46:02.215522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:34.758 
[2024-11-07 09:46:02.215529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:34.758 [2024-11-07 09:46:02.215536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:34.758 [2024-11-07 09:46:02.215542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:34.758 [2024-11-07 09:46:02.215550] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:34.758 [2024-11-07 09:46:02.215559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:34.758 [2024-11-07 09:46:02.215568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:34.758 [2024-11-07 09:46:02.215575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:34.758 [2024-11-07 09:46:02.215582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:34.758 [2024-11-07 09:46:02.215589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:34.758 [2024-11-07 09:46:02.215595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:34.758 [2024-11-07 09:46:02.215602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:34.758 [2024-11-07 09:46:02.215609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:34.758 [2024-11-07 09:46:02.215616] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:34.758 [2024-11-07 09:46:02.215623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:34.758 [2024-11-07 09:46:02.215641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:34.758 [2024-11-07 09:46:02.215648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:34.758 [2024-11-07 09:46:02.215655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:34.758 [2024-11-07 09:46:02.215662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:34.758 [2024-11-07 09:46:02.215669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:34.758 [2024-11-07 09:46:02.215676] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:34.758 [2024-11-07 09:46:02.215684] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:34.758 [2024-11-07 09:46:02.215692] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:17:34.758 [2024-11-07 09:46:02.215699] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:34.758 [2024-11-07 09:46:02.215707] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:34.758 [2024-11-07 09:46:02.215714] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:34.758 [2024-11-07 09:46:02.215722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.758 [2024-11-07 09:46:02.215729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:34.758 [2024-11-07 09:46:02.215739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.623 ms 00:17:34.758 [2024-11-07 09:46:02.215745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.758 [2024-11-07 09:46:02.241316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.758 [2024-11-07 09:46:02.241433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:34.758 [2024-11-07 09:46:02.241491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.523 ms 00:17:34.758 [2024-11-07 09:46:02.241513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.758 [2024-11-07 09:46:02.241662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.758 [2024-11-07 09:46:02.241837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:34.758 [2024-11-07 09:46:02.241863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:17:34.758 [2024-11-07 09:46:02.241882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.758 [2024-11-07 09:46:02.282765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.758 [2024-11-07 09:46:02.282895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:34.758 [2024-11-07 09:46:02.282951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.848 ms 00:17:34.758 [2024-11-07 09:46:02.282978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.758 [2024-11-07 09:46:02.283079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.758 [2024-11-07 09:46:02.283107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:34.758 [2024-11-07 09:46:02.283127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:34.758 [2024-11-07 09:46:02.283189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.758 [2024-11-07 09:46:02.283518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.758 [2024-11-07 09:46:02.283607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:34.758 [2024-11-07 09:46:02.283681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:17:34.758 [2024-11-07 09:46:02.283710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.758 [2024-11-07 09:46:02.283847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.758 [2024-11-07 09:46:02.283876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:34.758 [2024-11-07 09:46:02.283929] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:17:34.758 [2024-11-07 09:46:02.283950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.758 [2024-11-07 09:46:02.297056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.758 [2024-11-07 09:46:02.297157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:34.758 [2024-11-07 09:46:02.297204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.042 ms 00:17:34.758 [2024-11-07 09:46:02.297226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.758 [2024-11-07 09:46:02.309761] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:34.758 [2024-11-07 09:46:02.309881] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:34.758 [2024-11-07 09:46:02.309937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.758 [2024-11-07 09:46:02.309957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:34.758 [2024-11-07 09:46:02.309977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.586 ms 00:17:34.758 [2024-11-07 09:46:02.309995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.758 [2024-11-07 09:46:02.334504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.758 [2024-11-07 09:46:02.334627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:34.758 [2024-11-07 09:46:02.334691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.434 ms 00:17:34.758 [2024-11-07 09:46:02.334732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.758 [2024-11-07 09:46:02.346054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.758 [2024-11-07 09:46:02.346158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:34.758 [2024-11-07 09:46:02.346208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.240 ms 00:17:34.758 [2024-11-07 09:46:02.346229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.758 [2024-11-07 09:46:02.358028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.758 [2024-11-07 09:46:02.358150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:34.759 [2024-11-07 09:46:02.358206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.375 ms 00:17:34.759 [2024-11-07 09:46:02.358228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.759 [2024-11-07 09:46:02.358972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.759 [2024-11-07 09:46:02.359070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:34.759 [2024-11-07 09:46:02.359126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.528 ms 00:17:34.759 [2024-11-07 09:46:02.359148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.759 [2024-11-07 09:46:02.413572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.759 [2024-11-07 09:46:02.413732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:34.759 [2024-11-07 09:46:02.413787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 54.386 ms 00:17:34.759 [2024-11-07 09:46:02.413838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.759 [2024-11-07 09:46:02.424031] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:35.020 [2024-11-07 09:46:02.437480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.020 [2024-11-07 09:46:02.437598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:35.020 [2024-11-07 09:46:02.437661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.523 ms 00:17:35.020 [2024-11-07 09:46:02.437683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.020 [2024-11-07 09:46:02.437780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.020 [2024-11-07 09:46:02.437807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:35.020 [2024-11-07 09:46:02.437828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:35.020 [2024-11-07 09:46:02.437846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.020 [2024-11-07 09:46:02.437905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.020 [2024-11-07 09:46:02.438009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:35.020 [2024-11-07 09:46:02.438029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:17:35.020 [2024-11-07 09:46:02.438048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.020 [2024-11-07 09:46:02.438086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.020 [2024-11-07 09:46:02.438109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:35.020 [2024-11-07 09:46:02.438180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:35.020 [2024-11-07 09:46:02.438203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.020 [2024-11-07 09:46:02.438248] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:35.020 [2024-11-07 09:46:02.438271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.020 [2024-11-07 09:46:02.438324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:35.020 [2024-11-07 09:46:02.438343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:17:35.020 [2024-11-07 09:46:02.438362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.020 [2024-11-07 09:46:02.461622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.020 [2024-11-07 09:46:02.461740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:35.020 [2024-11-07 09:46:02.461788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.196 ms 00:17:35.020 [2024-11-07 09:46:02.461811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.020 [2024-11-07 09:46:02.461905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.020 [2024-11-07 09:46:02.461931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:35.020 [2024-11-07 09:46:02.461951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:17:35.020 [2024-11-07 09:46:02.461969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
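Each management step in the startup sequence above is reported by trace_step as a fixed group of entries: Action (or Rollback during teardown), name, duration, and status. Summing the per-step durations gives a rough cross-check of the overall 'FTL startup' total reported just below (273.696 ms). A small awk sketch that does this over the startup portion of a captured log, assuming the output has been saved to a file named ftl.log (hypothetical name):

    # Sum every "duration: X ms" field emitted by trace_step entries;
    # this tolerates multiple log entries wrapped onto one physical line.
    awk '/trace_step/ { for (i = 1; i <= NF; i++) if ($i == "duration:") sum += $(i + 1) }
         END { printf "sum of step durations: %.3f ms\n", sum }' ftl.log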
00:17:35.020 [2024-11-07 09:46:02.462814] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:35.020 [2024-11-07 09:46:02.465925] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 273.696 ms, result 0 00:17:35.020 [2024-11-07 09:46:02.466716] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:35.020 [2024-11-07 09:46:02.479876] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:35.020  [2024-11-07T09:46:02.691Z] Copying: 4096/4096 [kB] (average 42 MBps)[2024-11-07 09:46:02.577529] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:35.020 [2024-11-07 09:46:02.586177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.020 [2024-11-07 09:46:02.586210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:35.020 [2024-11-07 09:46:02.586221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:35.020 [2024-11-07 09:46:02.586233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.020 [2024-11-07 09:46:02.586264] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:35.020 [2024-11-07 09:46:02.588856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.020 [2024-11-07 09:46:02.588882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:35.020 [2024-11-07 09:46:02.588893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.580 ms 00:17:35.020 [2024-11-07 09:46:02.588900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.020 [2024-11-07 09:46:02.590477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.020 [2024-11-07 09:46:02.590508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:35.020 [2024-11-07 09:46:02.590518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.556 ms 00:17:35.020 [2024-11-07 09:46:02.590525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.020 [2024-11-07 09:46:02.594346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.020 [2024-11-07 09:46:02.594376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:35.020 [2024-11-07 09:46:02.594386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.806 ms 00:17:35.020 [2024-11-07 09:46:02.594395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.020 [2024-11-07 09:46:02.601725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.020 [2024-11-07 09:46:02.601751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:35.020 [2024-11-07 09:46:02.601761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.304 ms 00:17:35.020 [2024-11-07 09:46:02.601770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.020 [2024-11-07 09:46:02.625083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.020 [2024-11-07 09:46:02.625114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:35.020 [2024-11-07 09:46:02.625124] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 23.263 ms 00:17:35.020 [2024-11-07 09:46:02.625131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.020 [2024-11-07 09:46:02.639239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.020 [2024-11-07 09:46:02.639273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:35.020 [2024-11-07 09:46:02.639287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.069 ms 00:17:35.020 [2024-11-07 09:46:02.639294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.020 [2024-11-07 09:46:02.639424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.020 [2024-11-07 09:46:02.639433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:35.020 [2024-11-07 09:46:02.639441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:17:35.020 [2024-11-07 09:46:02.639450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.020 [2024-11-07 09:46:02.663457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.020 [2024-11-07 09:46:02.663574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:35.020 [2024-11-07 09:46:02.663589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.986 ms 00:17:35.020 [2024-11-07 09:46:02.663596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.020 [2024-11-07 09:46:02.687190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.020 [2024-11-07 09:46:02.687218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:35.020 [2024-11-07 09:46:02.687229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.565 ms 00:17:35.020 [2024-11-07 09:46:02.687242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.295 [2024-11-07 09:46:02.710143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.295 [2024-11-07 09:46:02.710256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:35.295 [2024-11-07 09:46:02.710270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.868 ms 00:17:35.295 [2024-11-07 09:46:02.710277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.295 [2024-11-07 09:46:02.733447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.296 [2024-11-07 09:46:02.733561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:35.296 [2024-11-07 09:46:02.733577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.116 ms 00:17:35.296 [2024-11-07 09:46:02.733584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.296 [2024-11-07 09:46:02.733614] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:35.296 [2024-11-07 09:46:02.733642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.733652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.733660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.733668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:17:35.296 [2024-11-07 09:46:02.733676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.733683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.733690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.733698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.733705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.733712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.733720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.733727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.733735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.733742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.733749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.733757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.733764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.733771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.733778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.733786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.733793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.733800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.733807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.733814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.733822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.733829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.733836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.733845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.733852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.733861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.733869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.733876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.733883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.733890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.733897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.733904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.733911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.733919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.733926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.733933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.733941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.733948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.733955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.733963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.733970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.733977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.733984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.733991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.733999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.734006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.734013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.734020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.734027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.734035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.734042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.734049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.734057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.734064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.734071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.734078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.734085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.734093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.734101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.734109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.734116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.734123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.734130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.734138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.734145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.734152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.734159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.734167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.734174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.734181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.734189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.734196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.734203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.734210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.734218] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.734225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.734233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.734240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.734248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.734255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:35.296 [2024-11-07 09:46:02.734263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:35.297 [2024-11-07 09:46:02.734270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:35.297 [2024-11-07 09:46:02.734277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:35.297 [2024-11-07 09:46:02.734284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:35.297 [2024-11-07 09:46:02.734292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:35.297 [2024-11-07 09:46:02.734299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:35.297 [2024-11-07 09:46:02.734306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:35.297 [2024-11-07 09:46:02.734313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:35.297 [2024-11-07 09:46:02.734321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:35.297 [2024-11-07 09:46:02.734328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:35.297 [2024-11-07 09:46:02.734335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:35.297 [2024-11-07 09:46:02.734371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:35.297 [2024-11-07 09:46:02.734379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:35.297 [2024-11-07 09:46:02.734386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:35.297 [2024-11-07 09:46:02.734394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:35.297 [2024-11-07 09:46:02.734401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:35.297 [2024-11-07 09:46:02.734416] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:35.297 [2024-11-07 09:46:02.734423] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5c1cbb46-fd78-473b-92db-2b008b37049d 00:17:35.297 [2024-11-07 09:46:02.734432] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:35.297 [2024-11-07 09:46:02.734439] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:17:35.297 [2024-11-07 09:46:02.734446] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:35.297 [2024-11-07 09:46:02.734453] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:35.297 [2024-11-07 09:46:02.734460] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:35.297 [2024-11-07 09:46:02.734468] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:35.297 [2024-11-07 09:46:02.734475] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:35.297 [2024-11-07 09:46:02.734481] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:35.297 [2024-11-07 09:46:02.734487] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:35.297 [2024-11-07 09:46:02.734494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.297 [2024-11-07 09:46:02.734504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:35.297 [2024-11-07 09:46:02.734512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.881 ms 00:17:35.297 [2024-11-07 09:46:02.734519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.297 [2024-11-07 09:46:02.746846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.297 [2024-11-07 09:46:02.746874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:35.297 [2024-11-07 09:46:02.746884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.300 ms 00:17:35.297 [2024-11-07 09:46:02.746892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.297 [2024-11-07 09:46:02.747246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.297 [2024-11-07 09:46:02.747256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:35.297 [2024-11-07 09:46:02.747264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms 00:17:35.297 [2024-11-07 09:46:02.747271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.297 [2024-11-07 09:46:02.781989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.297 [2024-11-07 09:46:02.782022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:35.297 [2024-11-07 09:46:02.782033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.297 [2024-11-07 09:46:02.782041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.297 [2024-11-07 09:46:02.782111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.297 [2024-11-07 09:46:02.782120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:35.297 [2024-11-07 09:46:02.782127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.297 [2024-11-07 09:46:02.782134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.297 [2024-11-07 09:46:02.782174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.297 [2024-11-07 09:46:02.782183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:35.297 [2024-11-07 09:46:02.782190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.297 [2024-11-07 09:46:02.782197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.297 [2024-11-07 09:46:02.782214] 
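The statistics block above closes the first "Dump statistics" step, and the "WAF: inf" line follows directly from the two counters beside it: write amplification is conventionally the ratio of total media writes to user writes, and with user writes at 0 (this instance only ever wrote its own metadata, the 960 total writes) the ratio is infinite. A minimal shell restatement of that arithmetic, using the counters exactly as printed:

    # Recompute the WAF figure from the dumped counters; a zero user-write
    # count is reported as "inf", matching the log line above.
    total_writes=960
    user_writes=0
    if (( user_writes == 0 )); then
        echo "WAF: inf"
    else
        awk -v t="$total_writes" -v u="$user_writes" 'BEGIN { printf "WAF: %.2f\n", t / u }'
    fi
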
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.297 [2024-11-07 09:46:02.782225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:35.297 [2024-11-07 09:46:02.782232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.297 [2024-11-07 09:46:02.782239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.297 [2024-11-07 09:46:02.859090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.297 [2024-11-07 09:46:02.859126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:35.297 [2024-11-07 09:46:02.859137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.297 [2024-11-07 09:46:02.859144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.297 [2024-11-07 09:46:02.921056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.297 [2024-11-07 09:46:02.921097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:35.297 [2024-11-07 09:46:02.921107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.297 [2024-11-07 09:46:02.921115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.297 [2024-11-07 09:46:02.921158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.297 [2024-11-07 09:46:02.921166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:35.297 [2024-11-07 09:46:02.921175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.297 [2024-11-07 09:46:02.921182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.297 [2024-11-07 09:46:02.921209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.297 [2024-11-07 09:46:02.921217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:35.297 [2024-11-07 09:46:02.921229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.297 [2024-11-07 09:46:02.921236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.297 [2024-11-07 09:46:02.921321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.297 [2024-11-07 09:46:02.921331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:35.297 [2024-11-07 09:46:02.921338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.297 [2024-11-07 09:46:02.921345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.297 [2024-11-07 09:46:02.921374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.297 [2024-11-07 09:46:02.921383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:35.297 [2024-11-07 09:46:02.921391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.297 [2024-11-07 09:46:02.921401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.297 [2024-11-07 09:46:02.921435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.297 [2024-11-07 09:46:02.921444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:35.297 [2024-11-07 09:46:02.921452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.297 [2024-11-07 09:46:02.921459] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:17:35.297 [2024-11-07 09:46:02.921498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.297 [2024-11-07 09:46:02.921507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:35.297 [2024-11-07 09:46:02.921517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.297 [2024-11-07 09:46:02.921525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.297 [2024-11-07 09:46:02.921678] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 335.450 ms, result 0 00:17:36.240 00:17:36.240 00:17:36.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:36.240 09:46:03 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=74127 00:17:36.240 09:46:03 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 74127 00:17:36.240 09:46:03 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:36.240 09:46:03 ftl.ftl_trim -- common/autotest_common.sh@833 -- # '[' -z 74127 ']' 00:17:36.240 09:46:03 ftl.ftl_trim -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:36.240 09:46:03 ftl.ftl_trim -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:36.240 09:46:03 ftl.ftl_trim -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:36.240 09:46:03 ftl.ftl_trim -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:36.240 09:46:03 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:36.240 [2024-11-07 09:46:03.681731] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
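The xtrace above shows ftl/trim.sh relaunching spdk_tgt with -L ftl_init (trim.sh@92-94) and blocking in waitforlisten until pid 74127 answers on /var/tmp/spdk.sock, with max_retries=100 as the retry budget. Below is a minimal sketch of that polling loop under the same paths; the real helper in autotest_common.sh does more (xtrace juggling, richer diagnostics), so this only captures the core idea:

    # Poll the RPC socket until the freshly launched target responds, bailing
    # out early if the process dies before it ever starts listening.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    rpc_sock=/var/tmp/spdk.sock
    svcpid=74127
    for ((i = 0; i < 100; i++)); do
        "$rpc_py" -s "$rpc_sock" -t 1 rpc_get_methods &> /dev/null && break
        kill -0 "$svcpid" 2> /dev/null || { echo "spdk_tgt died before listening" >&2; exit 1; }
        sleep 0.5
    done
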
00:17:36.240 [2024-11-07 09:46:03.682345] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74127 ] 00:17:36.240 [2024-11-07 09:46:03.838794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.498 [2024-11-07 09:46:03.936524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:37.069 09:46:04 ftl.ftl_trim -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:37.069 09:46:04 ftl.ftl_trim -- common/autotest_common.sh@866 -- # return 0 00:17:37.069 09:46:04 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:37.069 [2024-11-07 09:46:04.730566] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:37.069 [2024-11-07 09:46:04.730753] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:37.330 [2024-11-07 09:46:04.904575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.330 [2024-11-07 09:46:04.904751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:37.330 [2024-11-07 09:46:04.904828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:37.330 [2024-11-07 09:46:04.904854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.330 [2024-11-07 09:46:04.907491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.330 [2024-11-07 09:46:04.907600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:37.330 [2024-11-07 09:46:04.907670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.603 ms 00:17:37.330 [2024-11-07 09:46:04.907695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.330 [2024-11-07 09:46:04.907785] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:37.330 [2024-11-07 09:46:04.908476] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:37.330 [2024-11-07 09:46:04.908499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.330 [2024-11-07 09:46:04.908507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:37.330 [2024-11-07 09:46:04.908517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.726 ms 00:17:37.330 [2024-11-07 09:46:04.908525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.330 [2024-11-07 09:46:04.909911] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:37.330 [2024-11-07 09:46:04.922436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.330 [2024-11-07 09:46:04.922473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:37.330 [2024-11-07 09:46:04.922486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.530 ms 00:17:37.330 [2024-11-07 09:46:04.922495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.330 [2024-11-07 09:46:04.922573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.330 [2024-11-07 09:46:04.922585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:37.330 [2024-11-07 09:46:04.922594] 
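The two "Currently unable to find bdev with name: nvc0n1" notices are load_config (trim.sh@96) replaying the saved configuration before the cache bdev exists; once nvc0n1p0 appears, FTL startup adopts it as the write buffer cache and reopens the base and cache bdevs. The JSON itself is not reproduced in the log, so the sketch below is only a plausible explicit equivalent (flag spellings per rpc.py's bdev_ftl_create; "base0n1" is a hypothetical base-bdev name, since the log names only the cache device):

    # Hypothetical equivalent of the replayed config: build the ftl0 bdev from
    # a base (bulk) bdev plus the NV cache partition named in the log.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc_py" bdev_ftl_create -b ftl0 -d base0n1 -c nvc0n1p0
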
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:37.330 [2024-11-07 09:46:04.922603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.330 [2024-11-07 09:46:04.927339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.330 [2024-11-07 09:46:04.927372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:37.330 [2024-11-07 09:46:04.927383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.671 ms 00:17:37.330 [2024-11-07 09:46:04.927393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.330 [2024-11-07 09:46:04.927484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.330 [2024-11-07 09:46:04.927495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:37.330 [2024-11-07 09:46:04.927503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:17:37.330 [2024-11-07 09:46:04.927512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.330 [2024-11-07 09:46:04.927541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.330 [2024-11-07 09:46:04.927551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:37.330 [2024-11-07 09:46:04.927558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:37.330 [2024-11-07 09:46:04.927567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.330 [2024-11-07 09:46:04.927591] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:37.330 [2024-11-07 09:46:04.931121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.330 [2024-11-07 09:46:04.931256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:37.330 [2024-11-07 09:46:04.931275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.532 ms 00:17:37.330 [2024-11-07 09:46:04.931283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.330 [2024-11-07 09:46:04.931321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.330 [2024-11-07 09:46:04.931329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:37.330 [2024-11-07 09:46:04.931339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:37.330 [2024-11-07 09:46:04.931348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.330 [2024-11-07 09:46:04.931377] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:37.330 [2024-11-07 09:46:04.931393] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:37.330 [2024-11-07 09:46:04.931432] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:37.330 [2024-11-07 09:46:04.931448] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:37.330 [2024-11-07 09:46:04.931553] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:37.330 [2024-11-07 09:46:04.931563] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:37.330 [2024-11-07 09:46:04.931577] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:37.330 [2024-11-07 09:46:04.931587] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:37.330 [2024-11-07 09:46:04.931597] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:37.330 [2024-11-07 09:46:04.931605] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:37.330 [2024-11-07 09:46:04.931614] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:37.330 [2024-11-07 09:46:04.931621] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:37.330 [2024-11-07 09:46:04.931641] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:37.330 [2024-11-07 09:46:04.931649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.330 [2024-11-07 09:46:04.931658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:37.330 [2024-11-07 09:46:04.931665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:17:37.330 [2024-11-07 09:46:04.931674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.330 [2024-11-07 09:46:04.931773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.330 [2024-11-07 09:46:04.931783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:37.330 [2024-11-07 09:46:04.931791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:37.330 [2024-11-07 09:46:04.931800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.330 [2024-11-07 09:46:04.931903] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:37.330 [2024-11-07 09:46:04.931914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:37.330 [2024-11-07 09:46:04.931923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:37.330 [2024-11-07 09:46:04.931932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:37.330 [2024-11-07 09:46:04.931939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:37.330 [2024-11-07 09:46:04.931947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:37.330 [2024-11-07 09:46:04.931954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:37.330 [2024-11-07 09:46:04.931965] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:37.330 [2024-11-07 09:46:04.931973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:37.330 [2024-11-07 09:46:04.931981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:37.330 [2024-11-07 09:46:04.931988] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:37.330 [2024-11-07 09:46:04.931996] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:37.330 [2024-11-07 09:46:04.932003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:37.330 [2024-11-07 09:46:04.932011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:37.330 [2024-11-07 09:46:04.932019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:37.330 [2024-11-07 09:46:04.932028] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:37.330 
[2024-11-07 09:46:04.932034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:37.330 [2024-11-07 09:46:04.932042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:37.330 [2024-11-07 09:46:04.932049] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:37.330 [2024-11-07 09:46:04.932057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:37.330 [2024-11-07 09:46:04.932068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:37.330 [2024-11-07 09:46:04.932076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:37.330 [2024-11-07 09:46:04.932082] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:37.331 [2024-11-07 09:46:04.932092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:37.331 [2024-11-07 09:46:04.932098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:37.331 [2024-11-07 09:46:04.932106] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:37.331 [2024-11-07 09:46:04.932112] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:37.331 [2024-11-07 09:46:04.932120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:37.331 [2024-11-07 09:46:04.932127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:37.331 [2024-11-07 09:46:04.932134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:37.331 [2024-11-07 09:46:04.932141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:37.331 [2024-11-07 09:46:04.932149] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:37.331 [2024-11-07 09:46:04.932156] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:37.331 [2024-11-07 09:46:04.932164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:37.331 [2024-11-07 09:46:04.932170] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:37.331 [2024-11-07 09:46:04.932178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:37.331 [2024-11-07 09:46:04.932184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:37.331 [2024-11-07 09:46:04.932192] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:37.331 [2024-11-07 09:46:04.932198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:37.331 [2024-11-07 09:46:04.932207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:37.331 [2024-11-07 09:46:04.932214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:37.331 [2024-11-07 09:46:04.932222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:37.331 [2024-11-07 09:46:04.932229] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:37.331 [2024-11-07 09:46:04.932238] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:37.331 [2024-11-07 09:46:04.932245] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:37.331 [2024-11-07 09:46:04.932256] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:37.331 [2024-11-07 09:46:04.932263] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:37.331 [2024-11-07 09:46:04.932272] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:17:37.331 [2024-11-07 09:46:04.932279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:37.331 [2024-11-07 09:46:04.932286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:37.331 [2024-11-07 09:46:04.932293] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:37.331 [2024-11-07 09:46:04.932301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:37.331 [2024-11-07 09:46:04.932307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:37.331 [2024-11-07 09:46:04.932317] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:37.331 [2024-11-07 09:46:04.932326] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:37.331 [2024-11-07 09:46:04.932338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:37.331 [2024-11-07 09:46:04.932345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:37.331 [2024-11-07 09:46:04.932354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:37.331 [2024-11-07 09:46:04.932361] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:37.331 [2024-11-07 09:46:04.932369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:37.331 [2024-11-07 09:46:04.932376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:37.331 [2024-11-07 09:46:04.932384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:37.331 [2024-11-07 09:46:04.932391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:37.331 [2024-11-07 09:46:04.932399] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:37.331 [2024-11-07 09:46:04.932406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:37.331 [2024-11-07 09:46:04.932415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:37.331 [2024-11-07 09:46:04.932421] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:37.331 [2024-11-07 09:46:04.932430] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:37.331 [2024-11-07 09:46:04.932437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:37.331 [2024-11-07 09:46:04.932445] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:37.331 [2024-11-07 
09:46:04.932453] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:37.331 [2024-11-07 09:46:04.932464] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:37.331 [2024-11-07 09:46:04.932471] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:37.331 [2024-11-07 09:46:04.932480] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:37.331 [2024-11-07 09:46:04.932487] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:37.331 [2024-11-07 09:46:04.932496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.331 [2024-11-07 09:46:04.932503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:37.331 [2024-11-07 09:46:04.932512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.659 ms 00:17:37.331 [2024-11-07 09:46:04.932519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.331 [2024-11-07 09:46:04.958125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.331 [2024-11-07 09:46:04.958251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:37.331 [2024-11-07 09:46:04.958271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.535 ms 00:17:37.331 [2024-11-07 09:46:04.958280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.331 [2024-11-07 09:46:04.958400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.331 [2024-11-07 09:46:04.958410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:37.331 [2024-11-07 09:46:04.958420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:17:37.331 [2024-11-07 09:46:04.958428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.331 [2024-11-07 09:46:04.988530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.331 [2024-11-07 09:46:04.988560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:37.331 [2024-11-07 09:46:04.988575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.080 ms 00:17:37.331 [2024-11-07 09:46:04.988583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.331 [2024-11-07 09:46:04.988652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.331 [2024-11-07 09:46:04.988662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:37.331 [2024-11-07 09:46:04.988672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:37.331 [2024-11-07 09:46:04.988680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.331 [2024-11-07 09:46:04.988991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.331 [2024-11-07 09:46:04.989016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:37.331 [2024-11-07 09:46:04.989026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:17:37.331 [2024-11-07 09:46:04.989036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
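The layout dump that ends above is internally consistent, and two quick checks make the numbers concrete: 23592960 L2P entries at the reported address size of 4 bytes is exactly the 90.00 MiB shown for "Region l2p", and in the hex-block SB metadata listing (FTL blocks are 4 KiB) a blk_sz of 0x5a00 decodes to that same 90 MiB. Plain shell arithmetic:

    # L2P table size: entries * address size, in MiB.
    echo $(( 23592960 * 4 / 1048576 ))                # 90, as in "Region l2p"
    # The SB metadata listing counts 4 KiB blocks in hex; 0x5a00 blocks is
    # the same 90 MiB region.
    printf '%d MiB\n' $(( 0x5a00 * 4096 / 1048576 ))  # 90 MiB
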
[FTL][ftl0] status: 0 00:17:37.331 [2024-11-07 09:46:04.989157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.331 [2024-11-07 09:46:04.989169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:37.331 [2024-11-07 09:46:04.989179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:17:37.331 [2024-11-07 09:46:04.989186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.593 [2024-11-07 09:46:05.003455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.593 [2024-11-07 09:46:05.003485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:37.593 [2024-11-07 09:46:05.003501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.246 ms 00:17:37.593 [2024-11-07 09:46:05.003510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.593 [2024-11-07 09:46:05.015913] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:37.593 [2024-11-07 09:46:05.015947] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:37.593 [2024-11-07 09:46:05.015960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.593 [2024-11-07 09:46:05.015969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:37.593 [2024-11-07 09:46:05.015979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.344 ms 00:17:37.593 [2024-11-07 09:46:05.015986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.593 [2024-11-07 09:46:05.039693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.593 [2024-11-07 09:46:05.039725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:37.593 [2024-11-07 09:46:05.039737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.639 ms 00:17:37.593 [2024-11-07 09:46:05.039745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.593 [2024-11-07 09:46:05.051510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.593 [2024-11-07 09:46:05.051621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:37.593 [2024-11-07 09:46:05.051655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.698 ms 00:17:37.593 [2024-11-07 09:46:05.051663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.593 [2024-11-07 09:46:05.062993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.593 [2024-11-07 09:46:05.063019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:37.593 [2024-11-07 09:46:05.063031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.271 ms 00:17:37.593 [2024-11-07 09:46:05.063038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.593 [2024-11-07 09:46:05.063661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.593 [2024-11-07 09:46:05.063678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:37.593 [2024-11-07 09:46:05.063689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.535 ms 00:17:37.593 [2024-11-07 09:46:05.063696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.593 [2024-11-07 
09:46:05.132189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.593 [2024-11-07 09:46:05.132376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:37.593 [2024-11-07 09:46:05.132400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.467 ms 00:17:37.593 [2024-11-07 09:46:05.132410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.593 [2024-11-07 09:46:05.142916] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:37.593 [2024-11-07 09:46:05.156509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.593 [2024-11-07 09:46:05.156553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:37.593 [2024-11-07 09:46:05.156569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.007 ms 00:17:37.593 [2024-11-07 09:46:05.156580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.593 [2024-11-07 09:46:05.156679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.593 [2024-11-07 09:46:05.156692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:37.593 [2024-11-07 09:46:05.156701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:37.593 [2024-11-07 09:46:05.156710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.593 [2024-11-07 09:46:05.156757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.593 [2024-11-07 09:46:05.156768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:37.593 [2024-11-07 09:46:05.156776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:17:37.593 [2024-11-07 09:46:05.156785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.593 [2024-11-07 09:46:05.156809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.593 [2024-11-07 09:46:05.156819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:37.593 [2024-11-07 09:46:05.156827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:37.593 [2024-11-07 09:46:05.156843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.593 [2024-11-07 09:46:05.156873] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:37.593 [2024-11-07 09:46:05.156886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.593 [2024-11-07 09:46:05.156893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:37.593 [2024-11-07 09:46:05.156904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:37.593 [2024-11-07 09:46:05.156910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.593 [2024-11-07 09:46:05.180241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.593 [2024-11-07 09:46:05.180275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:37.593 [2024-11-07 09:46:05.180288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.306 ms 00:17:37.593 [2024-11-07 09:46:05.180296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.593 [2024-11-07 09:46:05.180380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.593 [2024-11-07 09:46:05.180391] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:37.593 [2024-11-07 09:46:05.180401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:17:37.593 [2024-11-07 09:46:05.180411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.594 [2024-11-07 09:46:05.181156] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:37.594 [2024-11-07 09:46:05.184019] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 276.303 ms, result 0 00:17:37.594 [2024-11-07 09:46:05.185624] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:37.594 Some configs were skipped because the RPC state that can call them passed over. 00:17:37.594 09:46:05 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:37.855 [2024-11-07 09:46:05.413866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.855 [2024-11-07 09:46:05.414031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:37.855 [2024-11-07 09:46:05.414090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.977 ms 00:17:37.855 [2024-11-07 09:46:05.414118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.855 [2024-11-07 09:46:05.414169] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.281 ms, result 0 00:17:37.855 true 00:17:37.855 09:46:05 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:38.115 [2024-11-07 09:46:05.605800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.115 [2024-11-07 09:46:05.605947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:38.115 [2024-11-07 09:46:05.606007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.659 ms 00:17:38.115 [2024-11-07 09:46:05.606031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.115 [2024-11-07 09:46:05.606085] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.948 ms, result 0 00:17:38.115 true 00:17:38.115 09:46:05 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 74127 00:17:38.115 09:46:05 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 74127 ']' 00:17:38.115 09:46:05 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 74127 00:17:38.115 09:46:05 ftl.ftl_trim -- common/autotest_common.sh@957 -- # uname 00:17:38.115 09:46:05 ftl.ftl_trim -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:38.115 09:46:05 ftl.ftl_trim -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74127 00:17:38.115 09:46:05 ftl.ftl_trim -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:38.115 killing process with pid 74127 00:17:38.115 09:46:05 ftl.ftl_trim -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:38.115 09:46:05 ftl.ftl_trim -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74127' 00:17:38.115 09:46:05 ftl.ftl_trim -- common/autotest_common.sh@971 -- # kill 74127 00:17:38.115 09:46:05 ftl.ftl_trim -- common/autotest_common.sh@976 -- # wait 74127 00:17:39.058 [2024-11-07 09:46:06.399988] 
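The two bdev_ftl_unmap calls above (trim.sh@99 and @100) trim the extremes of the address space, and the second LBA is not arbitrary: 23591936 is the L2P entry count from the layout dump (23592960) minus the 1024-block unmap length, i.e. the last valid 1024-LBA window. Restated with the same rpc.py invocation the log shows:

    # Trim the first and the last 1024-block window of ftl0; the final window
    # starts at l2p_entries - num_blocks = 23591936, exactly as in the log.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    num_blocks=1024
    l2p_entries=23592960
    "$rpc_py" bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks "$num_blocks"
    "$rpc_py" bdev_ftl_unmap -b ftl0 --lba $(( l2p_entries - num_blocks )) --num_blocks "$num_blocks"
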
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.058 [2024-11-07 09:46:06.400047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:39.058 [2024-11-07 09:46:06.400060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:39.058 [2024-11-07 09:46:06.400070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.058 [2024-11-07 09:46:06.400095] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:39.058 [2024-11-07 09:46:06.402681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.058 [2024-11-07 09:46:06.402712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:39.058 [2024-11-07 09:46:06.402726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.569 ms 00:17:39.058 [2024-11-07 09:46:06.402735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.058 [2024-11-07 09:46:06.403028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.058 [2024-11-07 09:46:06.403199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:39.058 [2024-11-07 09:46:06.403222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:17:39.058 [2024-11-07 09:46:06.403229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.058 [2024-11-07 09:46:06.407886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.058 [2024-11-07 09:46:06.407978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:39.058 [2024-11-07 09:46:06.408328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.621 ms 00:17:39.058 [2024-11-07 09:46:06.408409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.058 [2024-11-07 09:46:06.415389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.058 [2024-11-07 09:46:06.415485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:39.058 [2024-11-07 09:46:06.415540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.914 ms 00:17:39.058 [2024-11-07 09:46:06.415562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.058 [2024-11-07 09:46:06.425448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.058 [2024-11-07 09:46:06.425547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:39.058 [2024-11-07 09:46:06.425603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.817 ms 00:17:39.058 [2024-11-07 09:46:06.425647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.058 [2024-11-07 09:46:06.433079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.058 [2024-11-07 09:46:06.433178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:39.058 [2024-11-07 09:46:06.433236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.383 ms 00:17:39.058 [2024-11-07 09:46:06.433259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.058 [2024-11-07 09:46:06.433403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.058 [2024-11-07 09:46:06.433429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:39.058 [2024-11-07 09:46:06.433482] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:17:39.058 [2024-11-07 09:46:06.433503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.058 [2024-11-07 09:46:06.444024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.058 [2024-11-07 09:46:06.444114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:39.058 [2024-11-07 09:46:06.444163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.487 ms 00:17:39.058 [2024-11-07 09:46:06.444184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.058 [2024-11-07 09:46:06.453801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.058 [2024-11-07 09:46:06.453896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:39.058 [2024-11-07 09:46:06.453951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.531 ms 00:17:39.058 [2024-11-07 09:46:06.453973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.058 [2024-11-07 09:46:06.463591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.058 [2024-11-07 09:46:06.463722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:39.058 [2024-11-07 09:46:06.463780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.324 ms 00:17:39.058 [2024-11-07 09:46:06.463824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.058 [2024-11-07 09:46:06.473387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.058 [2024-11-07 09:46:06.473487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:39.058 [2024-11-07 09:46:06.473536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.441 ms 00:17:39.058 [2024-11-07 09:46:06.473558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.058 [2024-11-07 09:46:06.473599] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:39.058 [2024-11-07 09:46:06.473636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:39.058 [2024-11-07 09:46:06.473671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:39.058 [2024-11-07 09:46:06.473700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:39.058 [2024-11-07 09:46:06.473730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:39.058 [2024-11-07 09:46:06.473796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:39.058 [2024-11-07 09:46:06.473850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:39.058 [2024-11-07 09:46:06.473883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:39.058 [2024-11-07 09:46:06.474336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:39.058 [2024-11-07 09:46:06.474540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:39.058 [2024-11-07 09:46:06.474578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:39.058 [2024-11-07 09:46:06.474608] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:39.058 [2024-11-07 09:46:06.474655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:39.058 [2024-11-07 09:46:06.474685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:39.058 [2024-11-07 09:46:06.474752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:39.058 [2024-11-07 09:46:06.474784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:39.058 [2024-11-07 09:46:06.474818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:39.058 [2024-11-07 09:46:06.474847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:39.058 [2024-11-07 09:46:06.474876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:39.058 [2024-11-07 09:46:06.474904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:39.058 [2024-11-07 09:46:06.474934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:39.058 [2024-11-07 09:46:06.474993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:39.058 [2024-11-07 09:46:06.475029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:39.058 [2024-11-07 09:46:06.475058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:39.058 [2024-11-07 09:46:06.475088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:39.058 [2024-11-07 09:46:06.475116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:39.058 [2024-11-07 09:46:06.475145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:39.058 [2024-11-07 09:46:06.475173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:39.058 [2024-11-07 09:46:06.475203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:39.058 [2024-11-07 09:46:06.475272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:39.058 [2024-11-07 09:46:06.475306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.475334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.475364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.475392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.475422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.475451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 
[2024-11-07 09:46:06.475480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.475508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.475540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.475607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.475651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.475680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.475711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.475739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.475801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.475833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.475862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.475890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.475920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.475948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.476008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.476038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.476068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.476096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.476128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.476186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.476220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.476248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.476278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.476288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.476297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:17:39.059 [2024-11-07 09:46:06.476304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.476313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.476320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.476329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.476336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.476346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.476353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.476362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.476369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.476381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.476389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.476397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.476405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.476414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.476421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.476430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.476437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.476446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.476454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.476463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.476470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.476480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.476488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.476498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.476505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.476516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.476523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.476533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.476540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.476548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.476556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.476565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.476572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.476581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.476588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.476598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.476605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.476614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.476621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.476640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:39.059 [2024-11-07 09:46:06.476657] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:39.059 [2024-11-07 09:46:06.476670] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5c1cbb46-fd78-473b-92db-2b008b37049d 00:17:39.059 [2024-11-07 09:46:06.476684] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:39.059 [2024-11-07 09:46:06.476695] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:39.059 [2024-11-07 09:46:06.476701] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:39.059 [2024-11-07 09:46:06.476710] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:39.059 [2024-11-07 09:46:06.476717] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:39.059 [2024-11-07 09:46:06.476726] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:39.059 [2024-11-07 09:46:06.476733] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:39.059 [2024-11-07 09:46:06.476740] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:39.059 [2024-11-07 09:46:06.476746] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:39.059 [2024-11-07 09:46:06.476756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:39.059 [2024-11-07 09:46:06.476764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:39.059 [2024-11-07 09:46:06.476774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.159 ms 00:17:39.059 [2024-11-07 09:46:06.476782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.059 [2024-11-07 09:46:06.489349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.059 [2024-11-07 09:46:06.489441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:39.059 [2024-11-07 09:46:06.489520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.516 ms 00:17:39.059 [2024-11-07 09:46:06.489545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.059 [2024-11-07 09:46:06.489969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.059 [2024-11-07 09:46:06.490052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:39.059 [2024-11-07 09:46:06.490103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.359 ms 00:17:39.059 [2024-11-07 09:46:06.490127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.060 [2024-11-07 09:46:06.533745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.060 [2024-11-07 09:46:06.533863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:39.060 [2024-11-07 09:46:06.533913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.060 [2024-11-07 09:46:06.533935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.060 [2024-11-07 09:46:06.535171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.060 [2024-11-07 09:46:06.535263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:39.060 [2024-11-07 09:46:06.535312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.060 [2024-11-07 09:46:06.535336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.060 [2024-11-07 09:46:06.535400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.060 [2024-11-07 09:46:06.535423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:39.060 [2024-11-07 09:46:06.535445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.060 [2024-11-07 09:46:06.535463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.060 [2024-11-07 09:46:06.535493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.060 [2024-11-07 09:46:06.535512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:39.060 [2024-11-07 09:46:06.535533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.060 [2024-11-07 09:46:06.535585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.060 [2024-11-07 09:46:06.608557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.060 [2024-11-07 09:46:06.608681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:39.060 [2024-11-07 09:46:06.608722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.060 [2024-11-07 09:46:06.608741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.060 [2024-11-07 
09:46:06.656701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:39.060 [2024-11-07 09:46:06.656851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:17:39.060 [2024-11-07 09:46:06.656891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:39.060 [2024-11-07 09:46:06.656910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:39.060 [2024-11-07 09:46:06.656991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:39.060 [2024-11-07 09:46:06.657009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:17:39.060 [2024-11-07 09:46:06.657027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:39.060 [2024-11-07 09:46:06.657041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:39.060 [2024-11-07 09:46:06.657074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:39.060 [2024-11-07 09:46:06.657089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:17:39.060 [2024-11-07 09:46:06.657106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:39.060 [2024-11-07 09:46:06.657159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:39.060 [2024-11-07 09:46:06.657252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:39.060 [2024-11-07 09:46:06.657271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:17:39.060 [2024-11-07 09:46:06.657287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:39.060 [2024-11-07 09:46:06.657301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:39.060 [2024-11-07 09:46:06.657338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:39.060 [2024-11-07 09:46:06.657426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:17:39.060 [2024-11-07 09:46:06.657442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:39.060 [2024-11-07 09:46:06.657457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:39.060 [2024-11-07 09:46:06.657497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:39.060 [2024-11-07 09:46:06.657515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:17:39.060 [2024-11-07 09:46:06.657567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:39.060 [2024-11-07 09:46:06.657585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:39.060 [2024-11-07 09:46:06.657642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:39.060 [2024-11-07 09:46:06.657662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:17:39.060 [2024-11-07 09:46:06.657712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:39.060 [2024-11-07 09:46:06.657729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:39.060 [2024-11-07 09:46:06.657848] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 257.845 ms, result 0
00:17:39.629 09:46:07 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:17:39.629 [2024-11-07 09:46:07.229086] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization...
[2024-11-07 09:46:07.229380] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74174 ]
00:17:39.887 [2024-11-07 09:46:07.385508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:39.887 [2024-11-07 09:46:07.464123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:17:40.147 [2024-11-07 09:46:07.671503] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:17:40.147 [2024-11-07 09:46:07.671723] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:17:40.409 [2024-11-07 09:46:07.820189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:40.409 [2024-11-07 09:46:07.820389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:17:40.409 [2024-11-07 09:46:07.820740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:17:40.409 [2024-11-07 09:46:07.820785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:40.409 [2024-11-07 09:46:07.823557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:40.409 [2024-11-07 09:46:07.823684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:17:40.409 [2024-11-07 09:46:07.823758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.662 ms
00:17:40.409 [2024-11-07 09:46:07.823784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:40.409 [2024-11-07 09:46:07.823900] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:17:40.409 [2024-11-07 09:46:07.824719] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:17:40.409 [2024-11-07 09:46:07.824864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:40.409 [2024-11-07 09:46:07.824889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:17:40.409 [2024-11-07 09:46:07.824909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.971 ms
00:17:40.409 [2024-11-07 09:46:07.824927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:40.409 [2024-11-07 09:46:07.826109] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:17:40.409 [2024-11-07 09:46:07.838872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:40.409 [2024-11-07 09:46:07.838990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:17:40.409 [2024-11-07 09:46:07.839053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.765 ms
00:17:40.409 [2024-11-07 09:46:07.839082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:40.409 [2024-11-07 09:46:07.839586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:40.409 [2024-11-07 09:46:07.839839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:17:40.409 [2024-11-07 09:46:07.840072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms
00:17:40.409 [2024-11-07
09:46:07.840102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.409 [2024-11-07 09:46:07.846544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.409 [2024-11-07 09:46:07.846727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:40.409 [2024-11-07 09:46:07.846825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.336 ms 00:17:40.409 [2024-11-07 09:46:07.846871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.409 [2024-11-07 09:46:07.847067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.409 [2024-11-07 09:46:07.847121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:40.409 [2024-11-07 09:46:07.847164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:17:40.409 [2024-11-07 09:46:07.847266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.409 [2024-11-07 09:46:07.847327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.409 [2024-11-07 09:46:07.847351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:40.409 [2024-11-07 09:46:07.847368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:40.409 [2024-11-07 09:46:07.847382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.409 [2024-11-07 09:46:07.847429] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:40.409 [2024-11-07 09:46:07.852826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.409 [2024-11-07 09:46:07.852852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:40.409 [2024-11-07 09:46:07.852861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.408 ms 00:17:40.409 [2024-11-07 09:46:07.852869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.409 [2024-11-07 09:46:07.852929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.409 [2024-11-07 09:46:07.852938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:40.409 [2024-11-07 09:46:07.852947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:40.409 [2024-11-07 09:46:07.852954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.409 [2024-11-07 09:46:07.852971] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:40.409 [2024-11-07 09:46:07.852990] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:40.409 [2024-11-07 09:46:07.853024] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:40.409 [2024-11-07 09:46:07.853039] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:40.410 [2024-11-07 09:46:07.853143] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:40.410 [2024-11-07 09:46:07.853153] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:40.410 [2024-11-07 09:46:07.853163] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
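The numbers in this startup dump can be cross-checked against one another. A minimal sketch (Python; the 4 KiB logical block size is our assumption, inferred from --count=65536 in the spdk_dd invocation above versus the 256/256 [MB] copy total reported further down, and is not printed anywhere in the log):

MiB = 1024 * 1024

# Figures quoted from the surrounding log records.
l2p_entries = 23592960    # "L2P entries: 23592960" (layout dump below)
l2p_addr_size = 4         # "L2P address size: 4" (bytes per entry)
dd_count = 65536          # spdk_dd --count=65536 (blocks to copy)
block_size = 4096         # assumed 4 KiB logical block size (our inference)

# The L2P table works out to exactly the 90.00 MiB that the
# "Region l2p" entry in the NV cache layout dump reports.
assert l2p_entries * l2p_addr_size == 90 * MiB

# 65536 blocks of 4 KiB is the 256 [MB] total the copy loop reports.
assert dd_count * block_size == 256 * MiB

Both assertions hold, which is a quick way to confirm that the spdk_dd transfer below exercises exactly the block count requested on the command line.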
00:17:40.410 [2024-11-07 09:46:07.853172] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:40.410 [2024-11-07 09:46:07.853183] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:40.410 [2024-11-07 09:46:07.853191] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:40.410 [2024-11-07 09:46:07.853199] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:40.410 [2024-11-07 09:46:07.853207] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:40.410 [2024-11-07 09:46:07.853213] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:40.410 [2024-11-07 09:46:07.853221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.410 [2024-11-07 09:46:07.853228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:40.410 [2024-11-07 09:46:07.853235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:17:40.410 [2024-11-07 09:46:07.853242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.410 [2024-11-07 09:46:07.853333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.410 [2024-11-07 09:46:07.853341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:40.410 [2024-11-07 09:46:07.853350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:17:40.410 [2024-11-07 09:46:07.853357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.410 [2024-11-07 09:46:07.853456] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:40.410 [2024-11-07 09:46:07.853465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:40.410 [2024-11-07 09:46:07.853473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:40.410 [2024-11-07 09:46:07.853480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:40.410 [2024-11-07 09:46:07.853488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:40.410 [2024-11-07 09:46:07.853494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:40.410 [2024-11-07 09:46:07.853501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:40.410 [2024-11-07 09:46:07.853508] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:40.410 [2024-11-07 09:46:07.853515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:40.410 [2024-11-07 09:46:07.853521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:40.410 [2024-11-07 09:46:07.853528] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:40.410 [2024-11-07 09:46:07.853534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:40.410 [2024-11-07 09:46:07.853541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:40.410 [2024-11-07 09:46:07.853553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:40.410 [2024-11-07 09:46:07.853560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:40.410 [2024-11-07 09:46:07.853566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:40.410 [2024-11-07 09:46:07.853573] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:17:40.410 [2024-11-07 09:46:07.853580] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:40.410 [2024-11-07 09:46:07.853587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:40.410 [2024-11-07 09:46:07.853593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:40.410 [2024-11-07 09:46:07.853600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:40.410 [2024-11-07 09:46:07.853606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:40.410 [2024-11-07 09:46:07.853613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:40.410 [2024-11-07 09:46:07.853619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:40.410 [2024-11-07 09:46:07.853625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:40.410 [2024-11-07 09:46:07.853650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:40.410 [2024-11-07 09:46:07.853657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:40.410 [2024-11-07 09:46:07.853663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:40.410 [2024-11-07 09:46:07.853670] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:40.410 [2024-11-07 09:46:07.853677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:40.410 [2024-11-07 09:46:07.853683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:40.410 [2024-11-07 09:46:07.853690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:40.410 [2024-11-07 09:46:07.853696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:40.410 [2024-11-07 09:46:07.853703] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:40.410 [2024-11-07 09:46:07.853709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:40.410 [2024-11-07 09:46:07.853716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:40.410 [2024-11-07 09:46:07.853722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:40.410 [2024-11-07 09:46:07.853729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:40.410 [2024-11-07 09:46:07.853736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:40.410 [2024-11-07 09:46:07.853742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:40.410 [2024-11-07 09:46:07.853748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:40.410 [2024-11-07 09:46:07.853755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:40.410 [2024-11-07 09:46:07.853762] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:40.410 [2024-11-07 09:46:07.853768] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:40.410 [2024-11-07 09:46:07.853775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:40.410 [2024-11-07 09:46:07.853782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:40.410 [2024-11-07 09:46:07.853792] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:40.410 [2024-11-07 09:46:07.853799] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:40.410 [2024-11-07 09:46:07.853806] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:40.410 [2024-11-07 09:46:07.853812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:40.410 [2024-11-07 09:46:07.853819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:40.410 [2024-11-07 09:46:07.853825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:40.410 [2024-11-07 09:46:07.853832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:40.410 [2024-11-07 09:46:07.853840] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:40.410 [2024-11-07 09:46:07.853848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:40.410 [2024-11-07 09:46:07.853856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:40.410 [2024-11-07 09:46:07.853863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:40.410 [2024-11-07 09:46:07.853870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:40.410 [2024-11-07 09:46:07.853877] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:40.410 [2024-11-07 09:46:07.853884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:40.410 [2024-11-07 09:46:07.853891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:40.410 [2024-11-07 09:46:07.853898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:40.410 [2024-11-07 09:46:07.853915] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:40.410 [2024-11-07 09:46:07.853922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:40.410 [2024-11-07 09:46:07.853929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:40.410 [2024-11-07 09:46:07.853936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:40.410 [2024-11-07 09:46:07.853943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:40.410 [2024-11-07 09:46:07.853949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:40.410 [2024-11-07 09:46:07.853956] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:40.410 [2024-11-07 09:46:07.853963] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:40.410 [2024-11-07 09:46:07.853972] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:40.410 [2024-11-07 09:46:07.853979] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:40.410 [2024-11-07 09:46:07.853986] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:40.410 [2024-11-07 09:46:07.853993] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:40.410 [2024-11-07 09:46:07.854000] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:40.410 [2024-11-07 09:46:07.854007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.410 [2024-11-07 09:46:07.854014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:40.410 [2024-11-07 09:46:07.854025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.621 ms 00:17:40.411 [2024-11-07 09:46:07.854032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.411 [2024-11-07 09:46:07.879617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.411 [2024-11-07 09:46:07.879659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:40.411 [2024-11-07 09:46:07.879670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.535 ms 00:17:40.411 [2024-11-07 09:46:07.879677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.411 [2024-11-07 09:46:07.879805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.411 [2024-11-07 09:46:07.879820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:40.411 [2024-11-07 09:46:07.879828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:17:40.411 [2024-11-07 09:46:07.879835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.411 [2024-11-07 09:46:07.918029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.411 [2024-11-07 09:46:07.918171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:40.411 [2024-11-07 09:46:07.918190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.172 ms 00:17:40.411 [2024-11-07 09:46:07.918202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.411 [2024-11-07 09:46:07.918311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.411 [2024-11-07 09:46:07.918324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:40.411 [2024-11-07 09:46:07.918332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:40.411 [2024-11-07 09:46:07.918340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.411 [2024-11-07 09:46:07.918680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.411 [2024-11-07 09:46:07.918695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:40.411 [2024-11-07 09:46:07.918704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:17:40.411 [2024-11-07 09:46:07.918715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.411 [2024-11-07 09:46:07.918848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:40.411 [2024-11-07 09:46:07.918857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:40.411 [2024-11-07 09:46:07.918865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:17:40.411 [2024-11-07 09:46:07.918872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.411 [2024-11-07 09:46:07.932050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.411 [2024-11-07 09:46:07.932079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:40.411 [2024-11-07 09:46:07.932089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.159 ms 00:17:40.411 [2024-11-07 09:46:07.932096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.411 [2024-11-07 09:46:07.944582] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:40.411 [2024-11-07 09:46:07.944614] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:40.411 [2024-11-07 09:46:07.944626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.411 [2024-11-07 09:46:07.944645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:40.411 [2024-11-07 09:46:07.944654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.425 ms 00:17:40.411 [2024-11-07 09:46:07.944661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.411 [2024-11-07 09:46:07.968946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.411 [2024-11-07 09:46:07.968989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:40.411 [2024-11-07 09:46:07.969000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.211 ms 00:17:40.411 [2024-11-07 09:46:07.969008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.411 [2024-11-07 09:46:07.981072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.411 [2024-11-07 09:46:07.981102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:40.411 [2024-11-07 09:46:07.981112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.987 ms 00:17:40.411 [2024-11-07 09:46:07.981119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.411 [2024-11-07 09:46:07.992567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.411 [2024-11-07 09:46:07.992704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:40.411 [2024-11-07 09:46:07.992720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.384 ms 00:17:40.411 [2024-11-07 09:46:07.992727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.411 [2024-11-07 09:46:07.993334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.411 [2024-11-07 09:46:07.993354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:40.411 [2024-11-07 09:46:07.993363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.520 ms 00:17:40.411 [2024-11-07 09:46:07.993370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.411 [2024-11-07 09:46:08.047784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.411 [2024-11-07 
09:46:08.047965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:40.411 [2024-11-07 09:46:08.047984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.391 ms 00:17:40.411 [2024-11-07 09:46:08.047993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.411 [2024-11-07 09:46:08.058208] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:40.411 [2024-11-07 09:46:08.072024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.411 [2024-11-07 09:46:08.072061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:40.411 [2024-11-07 09:46:08.072074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.938 ms 00:17:40.411 [2024-11-07 09:46:08.072083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.411 [2024-11-07 09:46:08.072171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.411 [2024-11-07 09:46:08.072181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:40.411 [2024-11-07 09:46:08.072189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:40.411 [2024-11-07 09:46:08.072197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.411 [2024-11-07 09:46:08.072242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.411 [2024-11-07 09:46:08.072250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:40.411 [2024-11-07 09:46:08.072258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:17:40.411 [2024-11-07 09:46:08.072265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.411 [2024-11-07 09:46:08.072293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.411 [2024-11-07 09:46:08.072303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:40.411 [2024-11-07 09:46:08.072311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:40.411 [2024-11-07 09:46:08.072319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.411 [2024-11-07 09:46:08.072347] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:40.411 [2024-11-07 09:46:08.072356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.411 [2024-11-07 09:46:08.072364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:40.411 [2024-11-07 09:46:08.072372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:40.411 [2024-11-07 09:46:08.072379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.672 [2024-11-07 09:46:08.096069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.672 [2024-11-07 09:46:08.096108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:40.672 [2024-11-07 09:46:08.096120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.668 ms 00:17:40.672 [2024-11-07 09:46:08.096128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.672 [2024-11-07 09:46:08.096220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.672 [2024-11-07 09:46:08.096231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:40.672 [2024-11-07 
09:46:08.096239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms
00:17:40.672 [2024-11-07 09:46:08.096246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:40.672 [2024-11-07 09:46:08.097009] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:17:40.672 [2024-11-07 09:46:08.099881] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 276.553 ms, result 0
00:17:40.672 [2024-11-07 09:46:08.101245] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:17:40.672 [2024-11-07 09:46:08.114073] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:17:41.615 [2024-11-07T09:46:10.278Z] Copying: 20/256 [MB] (20 MBps)
[2024-11-07T09:46:11.222Z] Copying: 35/256 [MB] (14 MBps)
[2024-11-07T09:46:12.606Z] Copying: 53/256 [MB] (18 MBps)
[2024-11-07T09:46:13.172Z] Copying: 73/256 [MB] (19 MBps)
[2024-11-07T09:46:14.554Z] Copying: 113/256 [MB] (39 MBps)
[2024-11-07T09:46:15.487Z] Copying: 154/256 [MB] (41 MBps)
[2024-11-07T09:46:16.426Z] Copying: 198/256 [MB] (43 MBps)
[2024-11-07T09:46:16.697Z] Copying: 241/256 [MB] (42 MBps)
[2024-11-07T09:46:17.265Z] Copying: 256/256 [MB] (average 30 MBps)
[2024-11-07 09:46:17.003373] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:17:49.594 [2024-11-07 09:46:17.015206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:49.594 [2024-11-07 09:46:17.015362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:17:49.594 [2024-11-07 09:46:17.015425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:17:49.594 [2024-11-07 09:46:17.015443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:49.594 [2024-11-07 09:46:17.015470] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:17:49.594 [2024-11-07 09:46:17.018159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:49.594 [2024-11-07 09:46:17.018234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:17:49.594 [2024-11-07 09:46:17.018288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.676 ms
00:17:49.594 [2024-11-07 09:46:17.018310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:49.594 [2024-11-07 09:46:17.018586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:49.594 [2024-11-07 09:46:17.018662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:17:49.594 [2024-11-07 09:46:17.018774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.240 ms
00:17:49.594 [2024-11-07 09:46:17.019504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:49.594 [2024-11-07 09:46:17.023273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:49.594 [2024-11-07 09:46:17.023359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:17:49.594 [2024-11-07 09:46:17.023411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.683 ms
00:17:49.594 [2024-11-07 09:46:17.023433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
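Each management step in the 'FTL startup' and 'FTL shutdown' sequences above is traced as a quadruple of records: Action (or Rollback), then name, duration, and status. Once the records sit one per line, a small helper (our own sketch against the format visible in this log, not part of SPDK) can fold a console log back into a per-step duration table:

import re
import sys

# Patterns written against the trace_step records seen in this log, e.g.
#   "... 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P"
#   "... 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.683 ms"
NAME_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.*)")
DURATION_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([0-9.]+) ms")

def summarize(lines):
    """Yield (step name, duration in ms) pairs in log order."""
    name = None
    for line in lines:
        m = NAME_RE.search(line)
        if m:
            name = m.group(1).strip()
            continue
        m = DURATION_RE.search(line)
        if m and name is not None:
            yield name, float(m.group(1))
            name = None

if __name__ == "__main__":
    for step, ms in summarize(sys.stdin):
        print(f"{ms:10.3f} ms  {step}")

Fed this console log on stdin, it would print, among others, "3.683 ms  Persist L2P", matching the record just above.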
00:17:49.594 [2024-11-07 09:46:17.030360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:49.594 [2024-11-07 09:46:17.030455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:17:49.594 [2024-11-07 09:46:17.030509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.897 ms
00:17:49.594 [2024-11-07 09:46:17.030531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:49.594 [2024-11-07 09:46:17.053575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:49.594 [2024-11-07 09:46:17.053712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:17:49.594 [2024-11-07 09:46:17.053761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.975 ms
00:17:49.594 [2024-11-07 09:46:17.053783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:49.594 [2024-11-07 09:46:17.067868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:49.594 [2024-11-07 09:46:17.067986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:17:49.594 [2024-11-07 09:46:17.068034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.050 ms
00:17:49.594 [2024-11-07 09:46:17.068061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:49.594 [2024-11-07 09:46:17.068202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:49.594 [2024-11-07 09:46:17.068227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:17:49.594 [2024-11-07 09:46:17.068247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms
00:17:49.594 [2024-11-07 09:46:17.068297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:49.594 [2024-11-07 09:46:17.091153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:49.594 [2024-11-07 09:46:17.091289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:17:49.594 [2024-11-07 09:46:17.091337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.815 ms
00:17:49.594 [2024-11-07 09:46:17.091358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:49.594 [2024-11-07 09:46:17.114201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:49.594 [2024-11-07 09:46:17.114322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:17:49.594 [2024-11-07 09:46:17.114370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.808 ms
00:17:49.594 [2024-11-07 09:46:17.114391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:49.594 [2024-11-07 09:46:17.136634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:49.594 [2024-11-07 09:46:17.136748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:17:49.594 [2024-11-07 09:46:17.136795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.203 ms
00:17:49.594 [2024-11-07 09:46:17.136817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:49.594 [2024-11-07 09:46:17.158741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:49.594 [2024-11-07 09:46:17.158875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:17:49.594 [2024-11-07 09:46:17.158929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.865 ms
00:17:49.594 [2024-11-07 09:46:17.158950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:49.594 [2024-11-07
09:46:17.158983] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:49.594 [2024-11-07 09:46:17.159010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:49.594 [2024-11-07 09:46:17.159041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:49.594 [2024-11-07 09:46:17.159070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:49.594 [2024-11-07 09:46:17.159143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:49.594 [2024-11-07 09:46:17.159172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:49.594 [2024-11-07 09:46:17.159201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:49.594 [2024-11-07 09:46:17.159229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:49.594 [2024-11-07 09:46:17.159266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:49.594 [2024-11-07 09:46:17.159388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:49.594 [2024-11-07 09:46:17.159419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:49.594 [2024-11-07 09:46:17.159484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:49.594 [2024-11-07 09:46:17.159515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:49.594 [2024-11-07 09:46:17.159564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:49.594 [2024-11-07 09:46:17.159595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:49.594 [2024-11-07 09:46:17.159670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:49.594 [2024-11-07 09:46:17.159702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:49.594 [2024-11-07 09:46:17.159756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:49.594 [2024-11-07 09:46:17.159786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:49.594 [2024-11-07 09:46:17.159843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:49.594 [2024-11-07 09:46:17.159874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:49.594 [2024-11-07 09:46:17.159902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:49.594 [2024-11-07 09:46:17.159955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:49.594 [2024-11-07 09:46:17.159977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:49.594 [2024-11-07 09:46:17.159985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:49.594 [2024-11-07 
09:46:17.159992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:49.594 [2024-11-07 09:46:17.160000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:49.594 [2024-11-07 09:46:17.160008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 
00:17:49.595 [2024-11-07 09:46:17.160181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 
wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:49.595 [2024-11-07 09:46:17.160570] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:49.595 [2024-11-07 09:46:17.160578] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5c1cbb46-fd78-473b-92db-2b008b37049d 00:17:49.595 [2024-11-07 09:46:17.160585] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:49.595 [2024-11-07 09:46:17.160592] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:49.595 [2024-11-07 09:46:17.160599] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:49.595 [2024-11-07 09:46:17.160607] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:49.595 [2024-11-07 09:46:17.160613] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:49.595 [2024-11-07 09:46:17.160621] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:49.595 [2024-11-07 09:46:17.161058] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:49.595 [2024-11-07 09:46:17.161109] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:49.595 [2024-11-07 09:46:17.161130] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:49.595 [2024-11-07 09:46:17.161192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.595 [2024-11-07 09:46:17.161223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:49.595 [2024-11-07 09:46:17.161244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.208 ms 00:17:49.595 [2024-11-07 09:46:17.161286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.595 [2024-11-07 09:46:17.173607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.595 [2024-11-07 09:46:17.173659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:49.595 [2024-11-07 09:46:17.173671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.257 ms 00:17:49.595 [2024-11-07 09:46:17.173679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.595 [2024-11-07 09:46:17.174051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.596 [2024-11-07 09:46:17.174071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:49.596 [2024-11-07 09:46:17.174081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.325 ms 00:17:49.596 [2024-11-07 09:46:17.174089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.596 [2024-11-07 09:46:17.208935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.596 [2024-11-07 09:46:17.208981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:49.596 [2024-11-07 09:46:17.208992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.596 [2024-11-07 09:46:17.208999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.596 [2024-11-07 09:46:17.209102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.596 [2024-11-07 09:46:17.209112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:49.596 [2024-11-07 09:46:17.209120] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.596 [2024-11-07 09:46:17.209127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.596 [2024-11-07 09:46:17.209172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.596 [2024-11-07 09:46:17.209182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:49.596 [2024-11-07 09:46:17.209189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.596 [2024-11-07 09:46:17.209196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.596 [2024-11-07 09:46:17.209212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.596 [2024-11-07 09:46:17.209223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:49.596 [2024-11-07 09:46:17.209230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.596 [2024-11-07 09:46:17.209238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.855 [2024-11-07 09:46:17.286273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.855 [2024-11-07 09:46:17.286332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:49.855 [2024-11-07 09:46:17.286343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.855 [2024-11-07 09:46:17.286350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.855 [2024-11-07 09:46:17.349744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.855 [2024-11-07 09:46:17.349797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:49.855 [2024-11-07 09:46:17.349808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.855 [2024-11-07 09:46:17.349816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.855 [2024-11-07 09:46:17.349882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.855 [2024-11-07 09:46:17.349890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:49.855 [2024-11-07 09:46:17.349898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.855 [2024-11-07 09:46:17.349905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.855 [2024-11-07 09:46:17.349932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.855 [2024-11-07 09:46:17.349941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:49.855 [2024-11-07 09:46:17.349951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.855 [2024-11-07 09:46:17.349958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.855 [2024-11-07 09:46:17.350042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.855 [2024-11-07 09:46:17.350051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:49.855 [2024-11-07 09:46:17.350059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.855 [2024-11-07 09:46:17.350066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.855 [2024-11-07 09:46:17.350099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.855 [2024-11-07 09:46:17.350108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize superblock 00:17:49.855 [2024-11-07 09:46:17.350116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.855 [2024-11-07 09:46:17.350125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.855 [2024-11-07 09:46:17.350161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.855 [2024-11-07 09:46:17.350170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:49.855 [2024-11-07 09:46:17.350177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.855 [2024-11-07 09:46:17.350184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.855 [2024-11-07 09:46:17.350225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.855 [2024-11-07 09:46:17.350234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:49.855 [2024-11-07 09:46:17.350244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.855 [2024-11-07 09:46:17.350252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.855 [2024-11-07 09:46:17.350377] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 335.174 ms, result 0 00:17:50.446 00:17:50.446 00:17:50.446 09:46:18 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:51.013 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:17:51.013 09:46:18 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:17:51.013 09:46:18 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:17:51.013 09:46:18 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:51.013 09:46:18 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:51.013 09:46:18 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:17:51.013 09:46:18 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:51.013 Process with pid 74127 is not found 00:17:51.013 09:46:18 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 74127 00:17:51.013 09:46:18 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 74127 ']' 00:17:51.013 09:46:18 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 74127 00:17:51.013 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (74127) - No such process 00:17:51.013 09:46:18 ftl.ftl_trim -- common/autotest_common.sh@979 -- # echo 'Process with pid 74127 is not found' 00:17:51.013 ************************************ 00:17:51.013 END TEST ftl_trim 00:17:51.013 ************************************ 00:17:51.013 00:17:51.013 real 0m53.398s 00:17:51.013 user 1m10.008s 00:17:51.013 sys 0m15.531s 00:17:51.013 09:46:18 ftl.ftl_trim -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:51.013 09:46:18 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:51.013 09:46:18 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:17:51.013 09:46:18 ftl -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:17:51.013 09:46:18 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:51.013 09:46:18 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:51.272 ************************************ 00:17:51.272 START TEST ftl_restore 00:17:51.272 
************************************ 00:17:51.272 09:46:18 ftl.ftl_restore -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:17:51.272 * Looking for test storage... 00:17:51.272 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:51.272 09:46:18 ftl.ftl_restore -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:51.272 09:46:18 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # lcov --version 00:17:51.272 09:46:18 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:51.272 09:46:18 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:51.272 09:46:18 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:51.272 09:46:18 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:51.272 09:46:18 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:51.272 09:46:18 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:17:51.272 09:46:18 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:17:51.272 09:46:18 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:17:51.272 09:46:18 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:17:51.272 09:46:18 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:17:51.272 09:46:18 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:17:51.272 09:46:18 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:17:51.272 09:46:18 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:51.272 09:46:18 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:17:51.272 09:46:18 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:17:51.272 09:46:18 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:51.272 09:46:18 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:51.272 09:46:18 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:17:51.272 09:46:18 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:17:51.272 09:46:18 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:51.272 09:46:18 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:17:51.272 09:46:18 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:17:51.272 09:46:18 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:17:51.272 09:46:18 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:17:51.272 09:46:18 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:51.272 09:46:18 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:17:51.272 09:46:18 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:17:51.272 09:46:18 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:51.272 09:46:18 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:51.272 09:46:18 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:17:51.272 09:46:18 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:51.272 09:46:18 ftl.ftl_restore -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:51.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.272 --rc genhtml_branch_coverage=1 00:17:51.272 --rc genhtml_function_coverage=1 00:17:51.272 --rc genhtml_legend=1 00:17:51.272 --rc geninfo_all_blocks=1 00:17:51.272 --rc geninfo_unexecuted_blocks=1 00:17:51.272 00:17:51.272 ' 00:17:51.272 09:46:18 ftl.ftl_restore -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:51.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.273 --rc genhtml_branch_coverage=1 00:17:51.273 --rc genhtml_function_coverage=1 00:17:51.273 --rc genhtml_legend=1 00:17:51.273 --rc geninfo_all_blocks=1 00:17:51.273 --rc geninfo_unexecuted_blocks=1 00:17:51.273 00:17:51.273 ' 00:17:51.273 09:46:18 ftl.ftl_restore -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:51.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.273 --rc genhtml_branch_coverage=1 00:17:51.273 --rc genhtml_function_coverage=1 00:17:51.273 --rc genhtml_legend=1 00:17:51.273 --rc geninfo_all_blocks=1 00:17:51.273 --rc geninfo_unexecuted_blocks=1 00:17:51.273 00:17:51.273 ' 00:17:51.273 09:46:18 ftl.ftl_restore -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:51.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.273 --rc genhtml_branch_coverage=1 00:17:51.273 --rc genhtml_function_coverage=1 00:17:51.273 --rc genhtml_legend=1 00:17:51.273 --rc geninfo_all_blocks=1 00:17:51.273 --rc geninfo_unexecuted_blocks=1 00:17:51.273 00:17:51.273 ' 00:17:51.273 09:46:18 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:51.273 09:46:18 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:17:51.273 09:46:18 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:51.273 09:46:18 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:51.273 09:46:18 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:17:51.273 09:46:18 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:51.273 09:46:18 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:51.273 09:46:18 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:51.273 09:46:18 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:51.273 09:46:18 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:51.273 09:46:18 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:51.273 09:46:18 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:51.273 09:46:18 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:51.273 09:46:18 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:51.273 09:46:18 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:51.273 09:46:18 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:51.273 09:46:18 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:51.273 09:46:18 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:51.273 09:46:18 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:51.273 09:46:18 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:51.273 09:46:18 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:51.273 09:46:18 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:51.273 09:46:18 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:51.273 09:46:18 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:51.273 09:46:18 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:51.273 09:46:18 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:51.273 09:46:18 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:51.273 09:46:18 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:51.273 09:46:18 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:51.273 09:46:18 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:51.273 09:46:18 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:17:51.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
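The xtrace above is restore.sh sourcing test/ftl/common.sh, which pins every binary path, RPC socket, and CPU mask the test will use before any process starts. A minimal sketch of the same setup, using only values visible in the trace (a reconstruction, not the literal common.sh source):

    rootdir=/home/vagrant/spdk_repo/spdk
    rpc_py=$rootdir/scripts/rpc.py
    export ftl_tgt_core_mask='[0]'                  # FTL target pinned to core 0
    export spdk_tgt_bin=$rootdir/build/bin/spdk_tgt
    export spdk_tgt_cpumask='[0]'
    export spdk_ini_cpumask='[1]'                   # initiator side runs on core 1
    export spdk_ini_rpc=/var/tmp/spdk.tgt.sock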
00:17:51.273 09:46:18 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.IC0sGp2gJT 00:17:51.273 09:46:18 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:51.273 09:46:18 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:17:51.273 09:46:18 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:17:51.273 09:46:18 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:51.273 09:46:18 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:17:51.273 09:46:18 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:17:51.273 09:46:18 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:17:51.273 09:46:18 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:17:51.273 09:46:18 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=74362 00:17:51.273 09:46:18 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 74362 00:17:51.273 09:46:18 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:51.273 09:46:18 ftl.ftl_restore -- common/autotest_common.sh@833 -- # '[' -z 74362 ']' 00:17:51.273 09:46:18 ftl.ftl_restore -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:51.273 09:46:18 ftl.ftl_restore -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:51.273 09:46:18 ftl.ftl_restore -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:51.273 09:46:18 ftl.ftl_restore -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:51.273 09:46:18 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:17:51.531 [2024-11-07 09:46:18.958207] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
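restore.sh then parses its arguments with getopts ':u:c:f': in this run -c 0000:00:10.0 becomes nv_cache, the remaining positional 0000:00:11.0 becomes the base device, and the RPC timeout is fixed at 240 s before spdk_tgt is launched and waitforlisten blocks on pid 74362. A hedged reconstruction of that option loop (only -c is exercised here; the semantics of -u and -f are assumptions, not confirmed by this trace):

    while getopts ':u:c:f' opt; do
        case $opt in
            c) nv_cache=$OPTARG ;;   # -c <bdf>: NV cache device, 0000:00:10.0 here
            u) uuid=$OPTARG ;;       # assumption: reuse an existing FTL instance UUID
            f) fast=1 ;;             # assumption: bare flag, takes no argument
        esac
    done
    shift $((OPTIND - 1))            # shows up as 'shift 2' in the xtrace above
    device=$1                        # base device, 0000:00:11.0 here
    timeout=240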
00:17:51.531 [2024-11-07 09:46:18.958324] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74362 ] 00:17:51.531 [2024-11-07 09:46:19.118104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.790 [2024-11-07 09:46:19.218636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:52.365 09:46:19 ftl.ftl_restore -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:52.365 09:46:19 ftl.ftl_restore -- common/autotest_common.sh@866 -- # return 0 00:17:52.365 09:46:19 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:52.365 09:46:19 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:17:52.365 09:46:19 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:52.365 09:46:19 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:17:52.365 09:46:19 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:17:52.366 09:46:19 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:52.623 09:46:20 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:52.623 09:46:20 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:17:52.623 09:46:20 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:52.623 09:46:20 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:17:52.624 09:46:20 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:17:52.624 09:46:20 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:17:52.624 09:46:20 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:17:52.624 09:46:20 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:52.624 09:46:20 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:17:52.624 { 00:17:52.624 "name": "nvme0n1", 00:17:52.624 "aliases": [ 00:17:52.624 "2cfa1e59-459d-4e22-85cd-a4eb7b275610" 00:17:52.624 ], 00:17:52.624 "product_name": "NVMe disk", 00:17:52.624 "block_size": 4096, 00:17:52.624 "num_blocks": 1310720, 00:17:52.624 "uuid": "2cfa1e59-459d-4e22-85cd-a4eb7b275610", 00:17:52.624 "numa_id": -1, 00:17:52.624 "assigned_rate_limits": { 00:17:52.624 "rw_ios_per_sec": 0, 00:17:52.624 "rw_mbytes_per_sec": 0, 00:17:52.624 "r_mbytes_per_sec": 0, 00:17:52.624 "w_mbytes_per_sec": 0 00:17:52.624 }, 00:17:52.624 "claimed": true, 00:17:52.624 "claim_type": "read_many_write_one", 00:17:52.624 "zoned": false, 00:17:52.624 "supported_io_types": { 00:17:52.624 "read": true, 00:17:52.624 "write": true, 00:17:52.624 "unmap": true, 00:17:52.624 "flush": true, 00:17:52.624 "reset": true, 00:17:52.624 "nvme_admin": true, 00:17:52.624 "nvme_io": true, 00:17:52.624 "nvme_io_md": false, 00:17:52.624 "write_zeroes": true, 00:17:52.624 "zcopy": false, 00:17:52.624 "get_zone_info": false, 00:17:52.624 "zone_management": false, 00:17:52.624 "zone_append": false, 00:17:52.624 "compare": true, 00:17:52.624 "compare_and_write": false, 00:17:52.624 "abort": true, 00:17:52.624 "seek_hole": false, 00:17:52.624 "seek_data": false, 00:17:52.624 "copy": true, 00:17:52.624 "nvme_iov_md": false 00:17:52.624 }, 00:17:52.624 "driver_specific": { 00:17:52.624 "nvme": [ 
00:17:52.624 { 00:17:52.624 "pci_address": "0000:00:11.0", 00:17:52.624 "trid": { 00:17:52.624 "trtype": "PCIe", 00:17:52.624 "traddr": "0000:00:11.0" 00:17:52.624 }, 00:17:52.624 "ctrlr_data": { 00:17:52.624 "cntlid": 0, 00:17:52.624 "vendor_id": "0x1b36", 00:17:52.624 "model_number": "QEMU NVMe Ctrl", 00:17:52.624 "serial_number": "12341", 00:17:52.624 "firmware_revision": "8.0.0", 00:17:52.624 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:52.624 "oacs": { 00:17:52.624 "security": 0, 00:17:52.624 "format": 1, 00:17:52.624 "firmware": 0, 00:17:52.624 "ns_manage": 1 00:17:52.624 }, 00:17:52.624 "multi_ctrlr": false, 00:17:52.624 "ana_reporting": false 00:17:52.624 }, 00:17:52.624 "vs": { 00:17:52.624 "nvme_version": "1.4" 00:17:52.624 }, 00:17:52.624 "ns_data": { 00:17:52.624 "id": 1, 00:17:52.624 "can_share": false 00:17:52.624 } 00:17:52.624 } 00:17:52.624 ], 00:17:52.624 "mp_policy": "active_passive" 00:17:52.624 } 00:17:52.624 } 00:17:52.624 ]' 00:17:52.624 09:46:20 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:17:52.624 09:46:20 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:17:52.624 09:46:20 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:17:52.882 09:46:20 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # nb=1310720 00:17:52.882 09:46:20 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:17:52.882 09:46:20 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 5120 00:17:52.882 09:46:20 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:17:52.882 09:46:20 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:52.882 09:46:20 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:17:52.882 09:46:20 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:52.882 09:46:20 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:52.882 09:46:20 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=ce3639a5-1101-482b-a165-1030bb736d6f 00:17:52.882 09:46:20 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:17:52.882 09:46:20 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ce3639a5-1101-482b-a165-1030bb736d6f 00:17:53.140 09:46:20 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:53.399 09:46:20 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=fe91d909-f58f-4630-9ee5-3a563cd5c40a 00:17:53.399 09:46:20 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u fe91d909-f58f-4630-9ee5-3a563cd5c40a 00:17:53.657 09:46:21 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=9b7fd91b-da64-406d-9e41-5542f79f22d5 00:17:53.657 09:46:21 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:17:53.657 09:46:21 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 9b7fd91b-da64-406d-9e41-5542f79f22d5 00:17:53.657 09:46:21 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:17:53.657 09:46:21 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:53.657 09:46:21 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=9b7fd91b-da64-406d-9e41-5542f79f22d5 00:17:53.657 09:46:21 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:17:53.657 09:46:21 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 
9b7fd91b-da64-406d-9e41-5542f79f22d5 00:17:53.657 09:46:21 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=9b7fd91b-da64-406d-9e41-5542f79f22d5 00:17:53.657 09:46:21 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:17:53.657 09:46:21 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:17:53.657 09:46:21 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:17:53.657 09:46:21 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9b7fd91b-da64-406d-9e41-5542f79f22d5 00:17:53.916 09:46:21 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:17:53.916 { 00:17:53.916 "name": "9b7fd91b-da64-406d-9e41-5542f79f22d5", 00:17:53.916 "aliases": [ 00:17:53.916 "lvs/nvme0n1p0" 00:17:53.916 ], 00:17:53.916 "product_name": "Logical Volume", 00:17:53.916 "block_size": 4096, 00:17:53.916 "num_blocks": 26476544, 00:17:53.916 "uuid": "9b7fd91b-da64-406d-9e41-5542f79f22d5", 00:17:53.916 "assigned_rate_limits": { 00:17:53.916 "rw_ios_per_sec": 0, 00:17:53.916 "rw_mbytes_per_sec": 0, 00:17:53.916 "r_mbytes_per_sec": 0, 00:17:53.916 "w_mbytes_per_sec": 0 00:17:53.916 }, 00:17:53.916 "claimed": false, 00:17:53.916 "zoned": false, 00:17:53.916 "supported_io_types": { 00:17:53.916 "read": true, 00:17:53.916 "write": true, 00:17:53.916 "unmap": true, 00:17:53.916 "flush": false, 00:17:53.916 "reset": true, 00:17:53.916 "nvme_admin": false, 00:17:53.916 "nvme_io": false, 00:17:53.916 "nvme_io_md": false, 00:17:53.916 "write_zeroes": true, 00:17:53.916 "zcopy": false, 00:17:53.916 "get_zone_info": false, 00:17:53.916 "zone_management": false, 00:17:53.916 "zone_append": false, 00:17:53.916 "compare": false, 00:17:53.916 "compare_and_write": false, 00:17:53.916 "abort": false, 00:17:53.916 "seek_hole": true, 00:17:53.916 "seek_data": true, 00:17:53.916 "copy": false, 00:17:53.916 "nvme_iov_md": false 00:17:53.916 }, 00:17:53.916 "driver_specific": { 00:17:53.916 "lvol": { 00:17:53.916 "lvol_store_uuid": "fe91d909-f58f-4630-9ee5-3a563cd5c40a", 00:17:53.916 "base_bdev": "nvme0n1", 00:17:53.916 "thin_provision": true, 00:17:53.916 "num_allocated_clusters": 0, 00:17:53.916 "snapshot": false, 00:17:53.916 "clone": false, 00:17:53.916 "esnap_clone": false 00:17:53.916 } 00:17:53.916 } 00:17:53.916 } 00:17:53.916 ]' 00:17:53.916 09:46:21 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:17:53.916 09:46:21 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:17:53.916 09:46:21 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:17:53.916 09:46:21 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # nb=26476544 00:17:53.916 09:46:21 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:17:53.916 09:46:21 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 103424 00:17:53.916 09:46:21 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:17:53.916 09:46:21 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:17:53.916 09:46:21 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:54.175 09:46:21 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:54.175 09:46:21 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:54.175 09:46:21 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 9b7fd91b-da64-406d-9e41-5542f79f22d5 00:17:54.175 09:46:21 
ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=9b7fd91b-da64-406d-9e41-5542f79f22d5 00:17:54.175 09:46:21 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:17:54.175 09:46:21 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:17:54.175 09:46:21 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:17:54.175 09:46:21 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9b7fd91b-da64-406d-9e41-5542f79f22d5 00:17:54.434 09:46:21 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:17:54.434 { 00:17:54.434 "name": "9b7fd91b-da64-406d-9e41-5542f79f22d5", 00:17:54.434 "aliases": [ 00:17:54.434 "lvs/nvme0n1p0" 00:17:54.434 ], 00:17:54.434 "product_name": "Logical Volume", 00:17:54.434 "block_size": 4096, 00:17:54.434 "num_blocks": 26476544, 00:17:54.434 "uuid": "9b7fd91b-da64-406d-9e41-5542f79f22d5", 00:17:54.434 "assigned_rate_limits": { 00:17:54.434 "rw_ios_per_sec": 0, 00:17:54.434 "rw_mbytes_per_sec": 0, 00:17:54.434 "r_mbytes_per_sec": 0, 00:17:54.434 "w_mbytes_per_sec": 0 00:17:54.434 }, 00:17:54.434 "claimed": false, 00:17:54.434 "zoned": false, 00:17:54.434 "supported_io_types": { 00:17:54.434 "read": true, 00:17:54.434 "write": true, 00:17:54.434 "unmap": true, 00:17:54.434 "flush": false, 00:17:54.434 "reset": true, 00:17:54.434 "nvme_admin": false, 00:17:54.434 "nvme_io": false, 00:17:54.434 "nvme_io_md": false, 00:17:54.434 "write_zeroes": true, 00:17:54.434 "zcopy": false, 00:17:54.434 "get_zone_info": false, 00:17:54.434 "zone_management": false, 00:17:54.434 "zone_append": false, 00:17:54.434 "compare": false, 00:17:54.434 "compare_and_write": false, 00:17:54.434 "abort": false, 00:17:54.434 "seek_hole": true, 00:17:54.434 "seek_data": true, 00:17:54.434 "copy": false, 00:17:54.434 "nvme_iov_md": false 00:17:54.434 }, 00:17:54.434 "driver_specific": { 00:17:54.434 "lvol": { 00:17:54.434 "lvol_store_uuid": "fe91d909-f58f-4630-9ee5-3a563cd5c40a", 00:17:54.434 "base_bdev": "nvme0n1", 00:17:54.434 "thin_provision": true, 00:17:54.434 "num_allocated_clusters": 0, 00:17:54.434 "snapshot": false, 00:17:54.434 "clone": false, 00:17:54.434 "esnap_clone": false 00:17:54.434 } 00:17:54.434 } 00:17:54.434 } 00:17:54.434 ]' 00:17:54.434 09:46:21 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:17:54.434 09:46:21 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:17:54.434 09:46:21 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:17:54.434 09:46:21 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # nb=26476544 00:17:54.434 09:46:21 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:17:54.434 09:46:21 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 103424 00:17:54.434 09:46:21 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:17:54.434 09:46:21 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:54.693 09:46:22 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:17:54.693 09:46:22 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 9b7fd91b-da64-406d-9e41-5542f79f22d5 00:17:54.693 09:46:22 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=9b7fd91b-da64-406d-9e41-5542f79f22d5 00:17:54.693 09:46:22 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:17:54.693 09:46:22 ftl.ftl_restore -- 
common/autotest_common.sh@1382 -- # local bs 00:17:54.693 09:46:22 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:17:54.693 09:46:22 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9b7fd91b-da64-406d-9e41-5542f79f22d5 00:17:54.981 09:46:22 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:17:54.981 { 00:17:54.981 "name": "9b7fd91b-da64-406d-9e41-5542f79f22d5", 00:17:54.981 "aliases": [ 00:17:54.981 "lvs/nvme0n1p0" 00:17:54.981 ], 00:17:54.981 "product_name": "Logical Volume", 00:17:54.981 "block_size": 4096, 00:17:54.981 "num_blocks": 26476544, 00:17:54.981 "uuid": "9b7fd91b-da64-406d-9e41-5542f79f22d5", 00:17:54.981 "assigned_rate_limits": { 00:17:54.981 "rw_ios_per_sec": 0, 00:17:54.981 "rw_mbytes_per_sec": 0, 00:17:54.981 "r_mbytes_per_sec": 0, 00:17:54.981 "w_mbytes_per_sec": 0 00:17:54.981 }, 00:17:54.981 "claimed": false, 00:17:54.981 "zoned": false, 00:17:54.981 "supported_io_types": { 00:17:54.981 "read": true, 00:17:54.981 "write": true, 00:17:54.981 "unmap": true, 00:17:54.981 "flush": false, 00:17:54.981 "reset": true, 00:17:54.981 "nvme_admin": false, 00:17:54.981 "nvme_io": false, 00:17:54.981 "nvme_io_md": false, 00:17:54.981 "write_zeroes": true, 00:17:54.981 "zcopy": false, 00:17:54.981 "get_zone_info": false, 00:17:54.981 "zone_management": false, 00:17:54.981 "zone_append": false, 00:17:54.981 "compare": false, 00:17:54.981 "compare_and_write": false, 00:17:54.981 "abort": false, 00:17:54.981 "seek_hole": true, 00:17:54.981 "seek_data": true, 00:17:54.981 "copy": false, 00:17:54.981 "nvme_iov_md": false 00:17:54.981 }, 00:17:54.981 "driver_specific": { 00:17:54.981 "lvol": { 00:17:54.981 "lvol_store_uuid": "fe91d909-f58f-4630-9ee5-3a563cd5c40a", 00:17:54.981 "base_bdev": "nvme0n1", 00:17:54.981 "thin_provision": true, 00:17:54.981 "num_allocated_clusters": 0, 00:17:54.981 "snapshot": false, 00:17:54.981 "clone": false, 00:17:54.981 "esnap_clone": false 00:17:54.981 } 00:17:54.981 } 00:17:54.981 } 00:17:54.981 ]' 00:17:54.981 09:46:22 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:17:54.981 09:46:22 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:17:54.981 09:46:22 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:17:54.981 09:46:22 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # nb=26476544 00:17:54.981 09:46:22 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:17:54.981 09:46:22 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 103424 00:17:54.981 09:46:22 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:17:54.981 09:46:22 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 9b7fd91b-da64-406d-9e41-5542f79f22d5 --l2p_dram_limit 10' 00:17:54.981 09:46:22 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:17:54.981 09:46:22 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:17:54.981 09:46:22 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:17:54.981 09:46:22 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:17:54.981 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:17:54.981 09:46:22 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 9b7fd91b-da64-406d-9e41-5542f79f22d5 --l2p_dram_limit 10 -c nvc0n1p0 00:17:54.981 
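The '[: : integer expression expected' complaint above is benign: line 54 of restore.sh runs the numeric test '[' '' -eq 1 ']' against a variable that is empty in this configuration, `[` rejects the empty string as a non-integer, and the test simply evaluates false, so the script carries on to bdev_ftl_create. A defensive spelling that keeps the same logic without the stderr noise (a sketch with a hypothetical variable name, not the actual restore.sh code):

    flag=''                           # empty, exactly as in the failing test above
    if [ "${flag:-0}" -eq 1 ]; then   # substitute 0 before the numeric comparison
        echo 'flag enabled'
    fi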
[2024-11-07 09:46:22.586706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.981 [2024-11-07 09:46:22.586763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:54.981 [2024-11-07 09:46:22.586780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:54.981 [2024-11-07 09:46:22.586789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.981 [2024-11-07 09:46:22.586843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.981 [2024-11-07 09:46:22.586853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:54.981 [2024-11-07 09:46:22.586863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:17:54.981 [2024-11-07 09:46:22.586871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.981 [2024-11-07 09:46:22.586896] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:54.981 [2024-11-07 09:46:22.587743] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:54.981 [2024-11-07 09:46:22.587775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.981 [2024-11-07 09:46:22.587782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:54.981 [2024-11-07 09:46:22.587793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.884 ms 00:17:54.981 [2024-11-07 09:46:22.587800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.981 [2024-11-07 09:46:22.587871] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 60039578-222b-4b90-a79a-9095c30dd114 00:17:54.981 [2024-11-07 09:46:22.588946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.981 [2024-11-07 09:46:22.588981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:54.981 [2024-11-07 09:46:22.588991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:17:54.981 [2024-11-07 09:46:22.589000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.981 [2024-11-07 09:46:22.594535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.981 [2024-11-07 09:46:22.594696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:54.981 [2024-11-07 09:46:22.594714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.485 ms 00:17:54.981 [2024-11-07 09:46:22.594724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.981 [2024-11-07 09:46:22.594809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.981 [2024-11-07 09:46:22.594821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:54.981 [2024-11-07 09:46:22.594829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:17:54.981 [2024-11-07 09:46:22.594840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.981 [2024-11-07 09:46:22.594882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.981 [2024-11-07 09:46:22.594893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:54.981 [2024-11-07 09:46:22.594901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:54.981 [2024-11-07 09:46:22.594912] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.981 [2024-11-07 09:46:22.594932] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:54.981 [2024-11-07 09:46:22.598523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.981 [2024-11-07 09:46:22.598642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:54.981 [2024-11-07 09:46:22.598660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.593 ms 00:17:54.981 [2024-11-07 09:46:22.598668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.981 [2024-11-07 09:46:22.598703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.981 [2024-11-07 09:46:22.598711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:54.981 [2024-11-07 09:46:22.598721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:54.981 [2024-11-07 09:46:22.598728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.981 [2024-11-07 09:46:22.598755] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:54.981 [2024-11-07 09:46:22.598889] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:54.981 [2024-11-07 09:46:22.598904] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:54.981 [2024-11-07 09:46:22.598915] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:54.981 [2024-11-07 09:46:22.598927] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:54.981 [2024-11-07 09:46:22.598935] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:54.981 [2024-11-07 09:46:22.598945] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:54.981 [2024-11-07 09:46:22.598952] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:54.981 [2024-11-07 09:46:22.598962] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:54.981 [2024-11-07 09:46:22.598969] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:54.981 [2024-11-07 09:46:22.598978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.981 [2024-11-07 09:46:22.598985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:54.981 [2024-11-07 09:46:22.598994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.224 ms 00:17:54.981 [2024-11-07 09:46:22.599008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.981 [2024-11-07 09:46:22.599108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.981 [2024-11-07 09:46:22.599117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:54.981 [2024-11-07 09:46:22.599126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:17:54.981 [2024-11-07 09:46:22.599133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.981 [2024-11-07 09:46:22.599252] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:54.981 [2024-11-07 09:46:22.599262] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region sb 00:17:54.981 [2024-11-07 09:46:22.599271] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:54.981 [2024-11-07 09:46:22.599279] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.981 [2024-11-07 09:46:22.599289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:54.981 [2024-11-07 09:46:22.599296] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:54.981 [2024-11-07 09:46:22.599304] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:54.981 [2024-11-07 09:46:22.599311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:54.982 [2024-11-07 09:46:22.599320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:54.982 [2024-11-07 09:46:22.599326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:54.982 [2024-11-07 09:46:22.599334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:54.982 [2024-11-07 09:46:22.599341] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:54.982 [2024-11-07 09:46:22.599350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:54.982 [2024-11-07 09:46:22.599356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:54.982 [2024-11-07 09:46:22.599365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:54.982 [2024-11-07 09:46:22.599371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.982 [2024-11-07 09:46:22.599382] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:54.982 [2024-11-07 09:46:22.599389] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:54.982 [2024-11-07 09:46:22.599398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.982 [2024-11-07 09:46:22.599405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:54.982 [2024-11-07 09:46:22.599412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:54.982 [2024-11-07 09:46:22.599420] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:54.982 [2024-11-07 09:46:22.599428] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:54.982 [2024-11-07 09:46:22.599435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:54.982 [2024-11-07 09:46:22.599443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:54.982 [2024-11-07 09:46:22.599450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:54.982 [2024-11-07 09:46:22.599457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:54.982 [2024-11-07 09:46:22.599464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:54.982 [2024-11-07 09:46:22.599472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:54.982 [2024-11-07 09:46:22.599479] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:54.982 [2024-11-07 09:46:22.599486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:54.982 [2024-11-07 09:46:22.599493] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:54.982 [2024-11-07 09:46:22.599503] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:54.982 [2024-11-07 09:46:22.599509] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:54.982 [2024-11-07 09:46:22.599517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:54.982 [2024-11-07 09:46:22.599524] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:54.982 [2024-11-07 09:46:22.599531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:54.982 [2024-11-07 09:46:22.599538] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:54.982 [2024-11-07 09:46:22.599546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:54.982 [2024-11-07 09:46:22.599553] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.982 [2024-11-07 09:46:22.599561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:54.982 [2024-11-07 09:46:22.599567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:54.982 [2024-11-07 09:46:22.599574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.982 [2024-11-07 09:46:22.599581] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:54.982 [2024-11-07 09:46:22.599591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:54.982 [2024-11-07 09:46:22.599598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:54.982 [2024-11-07 09:46:22.599606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.982 [2024-11-07 09:46:22.599614] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:54.982 [2024-11-07 09:46:22.599624] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:54.982 [2024-11-07 09:46:22.599647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:54.982 [2024-11-07 09:46:22.599655] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:54.982 [2024-11-07 09:46:22.599662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:54.982 [2024-11-07 09:46:22.599670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:54.982 [2024-11-07 09:46:22.599682] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:54.982 [2024-11-07 09:46:22.599693] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:54.982 [2024-11-07 09:46:22.599704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:54.982 [2024-11-07 09:46:22.599713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:54.982 [2024-11-07 09:46:22.599720] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:54.982 [2024-11-07 09:46:22.599729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:54.982 [2024-11-07 09:46:22.599737] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:54.982 [2024-11-07 09:46:22.599746] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 
blk_offs:0x6120 blk_sz:0x800 00:17:54.982 [2024-11-07 09:46:22.599753] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:54.982 [2024-11-07 09:46:22.599761] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:54.982 [2024-11-07 09:46:22.599768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:54.982 [2024-11-07 09:46:22.599778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:54.982 [2024-11-07 09:46:22.599785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:54.982 [2024-11-07 09:46:22.599795] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:54.982 [2024-11-07 09:46:22.599802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:54.982 [2024-11-07 09:46:22.599811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:54.982 [2024-11-07 09:46:22.599818] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:54.982 [2024-11-07 09:46:22.599828] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:54.982 [2024-11-07 09:46:22.599835] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:54.982 [2024-11-07 09:46:22.599844] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:54.982 [2024-11-07 09:46:22.599851] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:54.982 [2024-11-07 09:46:22.599860] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:54.982 [2024-11-07 09:46:22.599867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.982 [2024-11-07 09:46:22.599875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:54.982 [2024-11-07 09:46:22.599883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.696 ms 00:17:54.982 [2024-11-07 09:46:22.599891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.982 [2024-11-07 09:46:22.599929] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
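The layout dump above is internally consistent: 20971520 L2P entries at an address size of 4 bytes is exactly the 80.00 MiB reported for the l2p region, and the same entry count at the bdev's 4096-byte block size maps 80 GiB of logical space out of the 102400 MiB data_btm region. The --l2p_dram_limit 10 passed to bdev_ftl_create is also why the startup later reports 'l2p maximum resident size is: 9 (of 10) MiB'. A quick check with the numbers from the trace:

    echo $(( 20971520 * 4 / 1024 / 1024 ))             # -> 80, MiB in the l2p region
    echo $(( 20971520 * 4096 / 1024 / 1024 / 1024 ))   # -> 80, GiB of mapped LBAs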
00:17:54.982 [2024-11-07 09:46:22.599941] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:57.513 [2024-11-07 09:46:24.805439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.513 [2024-11-07 09:46:24.805704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:57.513 [2024-11-07 09:46:24.805732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2205.500 ms 00:17:57.513 [2024-11-07 09:46:24.805746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.513 [2024-11-07 09:46:24.831424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.513 [2024-11-07 09:46:24.831478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:57.513 [2024-11-07 09:46:24.831496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.396 ms 00:17:57.513 [2024-11-07 09:46:24.831510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.513 [2024-11-07 09:46:24.831703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.513 [2024-11-07 09:46:24.831727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:57.513 [2024-11-07 09:46:24.831742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:17:57.513 [2024-11-07 09:46:24.831759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.513 [2024-11-07 09:46:24.862311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.513 [2024-11-07 09:46:24.862356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:57.513 [2024-11-07 09:46:24.862373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.493 ms 00:17:57.513 [2024-11-07 09:46:24.862388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.513 [2024-11-07 09:46:24.862433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.513 [2024-11-07 09:46:24.862450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:57.513 [2024-11-07 09:46:24.862462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:57.513 [2024-11-07 09:46:24.862475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.513 [2024-11-07 09:46:24.862924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.513 [2024-11-07 09:46:24.862954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:57.513 [2024-11-07 09:46:24.862968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.363 ms 00:17:57.513 [2024-11-07 09:46:24.862981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.513 [2024-11-07 09:46:24.863137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.513 [2024-11-07 09:46:24.863164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:57.513 [2024-11-07 09:46:24.863181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:17:57.513 [2024-11-07 09:46:24.863198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.513 [2024-11-07 09:46:24.877163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.513 [2024-11-07 09:46:24.877312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:57.513 [2024-11-07 
09:46:24.877333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.938 ms 00:17:57.513 [2024-11-07 09:46:24.877347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.513 [2024-11-07 09:46:24.888621] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:17:57.513 [2024-11-07 09:46:24.891300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.513 [2024-11-07 09:46:24.891332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:57.513 [2024-11-07 09:46:24.891349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.839 ms 00:17:57.513 [2024-11-07 09:46:24.891361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.513 [2024-11-07 09:46:24.961543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.513 [2024-11-07 09:46:24.961599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:57.513 [2024-11-07 09:46:24.961616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.140 ms 00:17:57.513 [2024-11-07 09:46:24.961625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.513 [2024-11-07 09:46:24.961817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.513 [2024-11-07 09:46:24.961830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:57.513 [2024-11-07 09:46:24.961842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.152 ms 00:17:57.513 [2024-11-07 09:46:24.961850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.513 [2024-11-07 09:46:24.985096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.513 [2024-11-07 09:46:24.985240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:57.513 [2024-11-07 09:46:24.985262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.200 ms 00:17:57.513 [2024-11-07 09:46:24.985271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.513 [2024-11-07 09:46:25.007623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.513 [2024-11-07 09:46:25.007664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:57.513 [2024-11-07 09:46:25.007689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.321 ms 00:17:57.513 [2024-11-07 09:46:25.007697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.513 [2024-11-07 09:46:25.008239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.513 [2024-11-07 09:46:25.008259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:57.513 [2024-11-07 09:46:25.008269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.521 ms 00:17:57.513 [2024-11-07 09:46:25.008277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.513 [2024-11-07 09:46:25.074962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.513 [2024-11-07 09:46:25.075136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:57.513 [2024-11-07 09:46:25.075160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.647 ms 00:17:57.513 [2024-11-07 09:46:25.075169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.513 [2024-11-07 
09:46:25.099006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.513 [2024-11-07 09:46:25.099044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:57.513 [2024-11-07 09:46:25.099059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.765 ms 00:17:57.513 [2024-11-07 09:46:25.099066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.513 [2024-11-07 09:46:25.121929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.513 [2024-11-07 09:46:25.122084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:57.513 [2024-11-07 09:46:25.122105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.824 ms 00:17:57.513 [2024-11-07 09:46:25.122113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.513 [2024-11-07 09:46:25.145747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.513 [2024-11-07 09:46:25.145786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:57.513 [2024-11-07 09:46:25.145800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.598 ms 00:17:57.513 [2024-11-07 09:46:25.145808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.513 [2024-11-07 09:46:25.145848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.513 [2024-11-07 09:46:25.145858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:57.513 [2024-11-07 09:46:25.145870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:57.513 [2024-11-07 09:46:25.145878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.513 [2024-11-07 09:46:25.145954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.513 [2024-11-07 09:46:25.145963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:57.513 [2024-11-07 09:46:25.145974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:17:57.513 [2024-11-07 09:46:25.145982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.513 [2024-11-07 09:46:25.146811] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2559.678 ms, result 0 00:17:57.513 { 00:17:57.513 "name": "ftl0", 00:17:57.514 "uuid": "60039578-222b-4b90-a79a-9095c30dd114" 00:17:57.514 } 00:17:57.514 09:46:25 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:17:57.514 09:46:25 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:57.772 09:46:25 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:17:57.772 09:46:25 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:58.031 [2024-11-07 09:46:25.522425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.031 [2024-11-07 09:46:25.522478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:58.031 [2024-11-07 09:46:25.522491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:58.032 [2024-11-07 09:46:25.522505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.032 [2024-11-07 09:46:25.522529] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 
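(Editorial note.) The four restore.sh lines traced just above build a standalone JSON config and then tear the FTL bdev down: the live bdev subsystem is serialized with `save_subsystem_config`, wrapped in a `{"subsystems": [...]}` envelope, and, per the `--json` argument that spdk_dd receives further down, stored as ftl.json so a fresh process can recreate ftl0. A sketch of the same pattern, with the redirection target assumed (the actual restore.sh may route the output differently):

```bash
#!/usr/bin/env bash
# Sketch only: capture the running app's bdev config so another SPDK app
# (here spdk_dd) can reconstruct the same bdevs, including ftl0, on its own.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json  # path taken from this log

{
    echo '{"subsystems": ['
    "$RPC" save_subsystem_config -n bdev   # emits one subsystem object, no envelope
    echo ']}'
} > "$CONF"

"$RPC" bdev_ftl_unload -b ftl0             # triggers the 'FTL shutdown' traced below
```

The unload is what makes the later restore meaningful: the shutdown trace that follows persists the L2P, the NV cache and band metadata, and finally sets the FTL clean state that the second startup checks when it reloads the superblock.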
00:17:58.032 [2024-11-07 09:46:25.525181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.032 [2024-11-07 09:46:25.525322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:58.032 [2024-11-07 09:46:25.525342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.633 ms 00:17:58.032 [2024-11-07 09:46:25.525350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.032 [2024-11-07 09:46:25.525613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.032 [2024-11-07 09:46:25.525622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:58.032 [2024-11-07 09:46:25.525653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.231 ms 00:17:58.032 [2024-11-07 09:46:25.525660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.032 [2024-11-07 09:46:25.528889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.032 [2024-11-07 09:46:25.528909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:58.032 [2024-11-07 09:46:25.528921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.212 ms 00:17:58.032 [2024-11-07 09:46:25.528929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.032 [2024-11-07 09:46:25.535077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.032 [2024-11-07 09:46:25.535105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:58.032 [2024-11-07 09:46:25.535120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.127 ms 00:17:58.032 [2024-11-07 09:46:25.535127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.032 [2024-11-07 09:46:25.558681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.032 [2024-11-07 09:46:25.558719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:58.032 [2024-11-07 09:46:25.558732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.482 ms 00:17:58.032 [2024-11-07 09:46:25.558739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.032 [2024-11-07 09:46:25.573626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.032 [2024-11-07 09:46:25.573672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:58.032 [2024-11-07 09:46:25.573686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.841 ms 00:17:58.032 [2024-11-07 09:46:25.573694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.032 [2024-11-07 09:46:25.573844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.032 [2024-11-07 09:46:25.573855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:58.032 [2024-11-07 09:46:25.573865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:17:58.032 [2024-11-07 09:46:25.573872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.032 [2024-11-07 09:46:25.596574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.032 [2024-11-07 09:46:25.596718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:58.032 [2024-11-07 09:46:25.596737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.682 ms 00:17:58.032 [2024-11-07 09:46:25.596745] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.032 [2024-11-07 09:46:25.619493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.032 [2024-11-07 09:46:25.619525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:58.032 [2024-11-07 09:46:25.619537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.713 ms 00:17:58.032 [2024-11-07 09:46:25.619545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.032 [2024-11-07 09:46:25.641705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.032 [2024-11-07 09:46:25.641736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:58.032 [2024-11-07 09:46:25.641749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.120 ms 00:17:58.032 [2024-11-07 09:46:25.641756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.032 [2024-11-07 09:46:25.664009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.032 [2024-11-07 09:46:25.664149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:58.032 [2024-11-07 09:46:25.664169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.176 ms 00:17:58.032 [2024-11-07 09:46:25.664176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.032 [2024-11-07 09:46:25.664212] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:58.032 [2024-11-07 09:46:25.664226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:58.032 [2024-11-07 09:46:25.664354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:58.032 [2024-11-07 09:46:25.664362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:58.032 [2024-11-07 09:46:25.664371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:58.032 [2024-11-07 09:46:25.664379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:58.032 [2024-11-07 09:46:25.664388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:58.032 [2024-11-07 09:46:25.664395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:58.032 [2024-11-07 09:46:25.664407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:58.032 [2024-11-07 09:46:25.664414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:58.032 [2024-11-07 09:46:25.664423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:58.032 [2024-11-07 09:46:25.664430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:58.032 [2024-11-07 09:46:25.664439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:58.032 [2024-11-07 09:46:25.664446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:58.032 [2024-11-07 09:46:25.664455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:58.032 [2024-11-07 
09:46:25.664463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:58.032 [2024-11-07 09:46:25.664472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:58.032 [2024-11-07 09:46:25.664479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:58.032 [2024-11-07 09:46:25.664488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:58.032 [2024-11-07 09:46:25.664495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:58.032 [2024-11-07 09:46:25.664504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:58.032 [2024-11-07 09:46:25.664511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:58.032 [2024-11-07 09:46:25.664522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:58.032 [2024-11-07 09:46:25.664529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:58.032 [2024-11-07 09:46:25.664540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:58.032 [2024-11-07 09:46:25.664547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:58.032 [2024-11-07 09:46:25.664556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:58.032 [2024-11-07 09:46:25.664563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:58.032 [2024-11-07 09:46:25.664574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:58.032 [2024-11-07 09:46:25.664582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:58.032 [2024-11-07 09:46:25.664591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:58.032 [2024-11-07 09:46:25.664598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:58.032 [2024-11-07 09:46:25.664607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:58.032 [2024-11-07 09:46:25.664614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:58.032 [2024-11-07 09:46:25.664623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:58.032 [2024-11-07 09:46:25.664640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:58.032 [2024-11-07 09:46:25.664650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:58.032 [2024-11-07 09:46:25.664657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:58.032 [2024-11-07 09:46:25.664666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:58.032 [2024-11-07 09:46:25.664673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 
00:17:58.032 [2024-11-07 09:46:25.664684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:58.032 [2024-11-07 09:46:25.664691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:58.032 [2024-11-07 09:46:25.664700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:58.032 [2024-11-07 09:46:25.664708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:58.032 [2024-11-07 09:46:25.664718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:58.032 [2024-11-07 09:46:25.664726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:58.032 [2024-11-07 09:46:25.664735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:58.032 [2024-11-07 09:46:25.664742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:58.032 [2024-11-07 09:46:25.664752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:58.032 [2024-11-07 09:46:25.664759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:58.032 [2024-11-07 09:46:25.664768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:58.033 [2024-11-07 09:46:25.664775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:58.033 [2024-11-07 09:46:25.664784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:58.033 [2024-11-07 09:46:25.664792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:58.033 [2024-11-07 09:46:25.664800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:58.033 [2024-11-07 09:46:25.664807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:58.033 [2024-11-07 09:46:25.664818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:58.033 [2024-11-07 09:46:25.664825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:58.033 [2024-11-07 09:46:25.664840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:58.033 [2024-11-07 09:46:25.664847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:58.033 [2024-11-07 09:46:25.664856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:58.033 [2024-11-07 09:46:25.664864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:58.033 [2024-11-07 09:46:25.664873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:58.033 [2024-11-07 09:46:25.664880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:58.033 [2024-11-07 09:46:25.664889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 
wr_cnt: 0 state: free 00:17:58.033 [2024-11-07 09:46:25.664897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:58.033 [2024-11-07 09:46:25.664905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:58.033 [2024-11-07 09:46:25.664912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:58.033 [2024-11-07 09:46:25.664921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:58.033 [2024-11-07 09:46:25.664929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:58.033 [2024-11-07 09:46:25.664937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:58.033 [2024-11-07 09:46:25.664944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:58.033 [2024-11-07 09:46:25.664956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:58.033 [2024-11-07 09:46:25.664963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:58.033 [2024-11-07 09:46:25.664973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:58.033 [2024-11-07 09:46:25.664980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:58.033 [2024-11-07 09:46:25.664989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:58.033 [2024-11-07 09:46:25.664996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:58.033 [2024-11-07 09:46:25.665005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:58.033 [2024-11-07 09:46:25.665013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:58.033 [2024-11-07 09:46:25.665022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:58.033 [2024-11-07 09:46:25.665030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:58.033 [2024-11-07 09:46:25.665039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:58.033 [2024-11-07 09:46:25.665047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:58.033 [2024-11-07 09:46:25.665055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:58.033 [2024-11-07 09:46:25.665063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:58.033 [2024-11-07 09:46:25.665072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:58.033 [2024-11-07 09:46:25.665079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:58.033 [2024-11-07 09:46:25.665090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:58.033 [2024-11-07 09:46:25.665097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:58.033 [2024-11-07 09:46:25.665106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:58.033 [2024-11-07 09:46:25.665113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:58.033 [2024-11-07 09:46:25.665123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:58.033 [2024-11-07 09:46:25.665130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:58.033 [2024-11-07 09:46:25.665139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:58.033 [2024-11-07 09:46:25.665146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:58.033 [2024-11-07 09:46:25.665155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:58.033 [2024-11-07 09:46:25.665162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:58.033 [2024-11-07 09:46:25.665171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:58.033 [2024-11-07 09:46:25.665178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:58.033 [2024-11-07 09:46:25.665189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:58.033 [2024-11-07 09:46:25.665205] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:58.033 [2024-11-07 09:46:25.665216] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 60039578-222b-4b90-a79a-9095c30dd114 00:17:58.033 [2024-11-07 09:46:25.665225] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:58.033 [2024-11-07 09:46:25.665234] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:58.033 [2024-11-07 09:46:25.665242] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:58.033 [2024-11-07 09:46:25.665253] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:58.033 [2024-11-07 09:46:25.665260] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:58.033 [2024-11-07 09:46:25.665269] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:58.033 [2024-11-07 09:46:25.665276] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:58.033 [2024-11-07 09:46:25.665284] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:58.033 [2024-11-07 09:46:25.665290] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:58.033 [2024-11-07 09:46:25.665299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.033 [2024-11-07 09:46:25.665306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:58.033 [2024-11-07 09:46:25.665320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.088 ms 00:17:58.033 [2024-11-07 09:46:25.665327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.033 [2024-11-07 09:46:25.677576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.033 [2024-11-07 09:46:25.677606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 
00:17:58.033 [2024-11-07 09:46:25.677618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.211 ms 00:17:58.033 [2024-11-07 09:46:25.677645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.033 [2024-11-07 09:46:25.678022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.033 [2024-11-07 09:46:25.678039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:58.033 [2024-11-07 09:46:25.678050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:17:58.033 [2024-11-07 09:46:25.678059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.292 [2024-11-07 09:46:25.719656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.292 [2024-11-07 09:46:25.719800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:58.292 [2024-11-07 09:46:25.719859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.292 [2024-11-07 09:46:25.719903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.292 [2024-11-07 09:46:25.719990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.292 [2024-11-07 09:46:25.720062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:58.292 [2024-11-07 09:46:25.720109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.292 [2024-11-07 09:46:25.720134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.292 [2024-11-07 09:46:25.720233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.292 [2024-11-07 09:46:25.720260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:58.292 [2024-11-07 09:46:25.720281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.292 [2024-11-07 09:46:25.720331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.292 [2024-11-07 09:46:25.720368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.292 [2024-11-07 09:46:25.720465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:58.292 [2024-11-07 09:46:25.720491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.292 [2024-11-07 09:46:25.720510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.292 [2024-11-07 09:46:25.798014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.292 [2024-11-07 09:46:25.798190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:58.292 [2024-11-07 09:46:25.798245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.292 [2024-11-07 09:46:25.798267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.292 [2024-11-07 09:46:25.860848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.292 [2024-11-07 09:46:25.861028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:58.292 [2024-11-07 09:46:25.861079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.292 [2024-11-07 09:46:25.861104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.292 [2024-11-07 09:46:25.861203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.292 [2024-11-07 09:46:25.861227] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:58.292 [2024-11-07 09:46:25.861249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.292 [2024-11-07 09:46:25.861268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.292 [2024-11-07 09:46:25.861392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.292 [2024-11-07 09:46:25.861419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:58.292 [2024-11-07 09:46:25.861440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.292 [2024-11-07 09:46:25.861460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.292 [2024-11-07 09:46:25.861564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.292 [2024-11-07 09:46:25.861648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:58.292 [2024-11-07 09:46:25.861672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.292 [2024-11-07 09:46:25.861691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.292 [2024-11-07 09:46:25.861782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.292 [2024-11-07 09:46:25.861807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:58.292 [2024-11-07 09:46:25.861830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.292 [2024-11-07 09:46:25.861911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.292 [2024-11-07 09:46:25.861993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.292 [2024-11-07 09:46:25.862017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:58.292 [2024-11-07 09:46:25.862037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.292 [2024-11-07 09:46:25.862056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.292 [2024-11-07 09:46:25.862110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.292 [2024-11-07 09:46:25.862163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:58.292 [2024-11-07 09:46:25.862217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.292 [2024-11-07 09:46:25.862236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.292 [2024-11-07 09:46:25.862371] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 339.914 ms, result 0 00:17:58.292 true 00:17:58.292 09:46:25 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 74362 00:17:58.292 09:46:25 ftl.ftl_restore -- common/autotest_common.sh@952 -- # '[' -z 74362 ']' 00:17:58.292 09:46:25 ftl.ftl_restore -- common/autotest_common.sh@956 -- # kill -0 74362 00:17:58.292 09:46:25 ftl.ftl_restore -- common/autotest_common.sh@957 -- # uname 00:17:58.292 09:46:25 ftl.ftl_restore -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:58.292 09:46:25 ftl.ftl_restore -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74362 00:17:58.292 killing process with pid 74362 00:17:58.292 09:46:25 ftl.ftl_restore -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:58.292 09:46:25 ftl.ftl_restore -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo 
']' 00:17:58.292 09:46:25 ftl.ftl_restore -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74362' 00:17:58.292 09:46:25 ftl.ftl_restore -- common/autotest_common.sh@971 -- # kill 74362 00:17:58.292 09:46:25 ftl.ftl_restore -- common/autotest_common.sh@976 -- # wait 74362 00:18:04.864 09:46:31 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:18:08.153 262144+0 records in 00:18:08.153 262144+0 records out 00:18:08.153 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.73842 s, 287 MB/s 00:18:08.153 09:46:35 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:18:10.056 09:46:37 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:10.056 [2024-11-07 09:46:37.702244] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:18:10.056 [2024-11-07 09:46:37.702344] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74570 ] 00:18:10.317 [2024-11-07 09:46:37.850617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.317 [2024-11-07 09:46:37.952260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.575 [2024-11-07 09:46:38.206212] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:10.575 [2024-11-07 09:46:38.206271] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:10.834 [2024-11-07 09:46:38.358959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.834 [2024-11-07 09:46:38.359018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:10.834 [2024-11-07 09:46:38.359036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:10.835 [2024-11-07 09:46:38.359044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.835 [2024-11-07 09:46:38.359094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.835 [2024-11-07 09:46:38.359104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:10.835 [2024-11-07 09:46:38.359114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:18:10.835 [2024-11-07 09:46:38.359122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.835 [2024-11-07 09:46:38.359140] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:10.835 [2024-11-07 09:46:38.359842] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:10.835 [2024-11-07 09:46:38.359861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.835 [2024-11-07 09:46:38.359868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:10.835 [2024-11-07 09:46:38.359877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.725 ms 00:18:10.835 [2024-11-07 09:46:38.359885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.835 [2024-11-07 09:46:38.361047] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: 
*NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:10.835 [2024-11-07 09:46:38.373279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.835 [2024-11-07 09:46:38.373314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:10.835 [2024-11-07 09:46:38.373327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.234 ms 00:18:10.835 [2024-11-07 09:46:38.373335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.835 [2024-11-07 09:46:38.373394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.835 [2024-11-07 09:46:38.373403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:10.835 [2024-11-07 09:46:38.373411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:18:10.835 [2024-11-07 09:46:38.373418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.835 [2024-11-07 09:46:38.378340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.835 [2024-11-07 09:46:38.378497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:10.835 [2024-11-07 09:46:38.378512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.863 ms 00:18:10.835 [2024-11-07 09:46:38.378520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.835 [2024-11-07 09:46:38.378603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.835 [2024-11-07 09:46:38.378618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:10.835 [2024-11-07 09:46:38.378642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:18:10.835 [2024-11-07 09:46:38.378653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.835 [2024-11-07 09:46:38.378703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.835 [2024-11-07 09:46:38.378717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:10.835 [2024-11-07 09:46:38.378725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:10.835 [2024-11-07 09:46:38.378736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.835 [2024-11-07 09:46:38.378759] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:10.835 [2024-11-07 09:46:38.382125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.835 [2024-11-07 09:46:38.382151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:10.835 [2024-11-07 09:46:38.382161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.372 ms 00:18:10.835 [2024-11-07 09:46:38.382170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.835 [2024-11-07 09:46:38.382204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.835 [2024-11-07 09:46:38.382211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:10.835 [2024-11-07 09:46:38.382219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:18:10.835 [2024-11-07 09:46:38.382227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.835 [2024-11-07 09:46:38.382246] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:10.835 [2024-11-07 09:46:38.382263] upgrade/ftl_sb_v5.c: 
278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:10.835 [2024-11-07 09:46:38.382297] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:10.835 [2024-11-07 09:46:38.382314] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:10.835 [2024-11-07 09:46:38.382416] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:10.835 [2024-11-07 09:46:38.382426] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:10.835 [2024-11-07 09:46:38.382436] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:10.835 [2024-11-07 09:46:38.382446] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:10.835 [2024-11-07 09:46:38.382455] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:10.835 [2024-11-07 09:46:38.382462] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:10.835 [2024-11-07 09:46:38.382470] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:10.835 [2024-11-07 09:46:38.382477] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:10.835 [2024-11-07 09:46:38.382484] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:10.835 [2024-11-07 09:46:38.382494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.835 [2024-11-07 09:46:38.382501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:10.835 [2024-11-07 09:46:38.382508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:18:10.835 [2024-11-07 09:46:38.382515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.835 [2024-11-07 09:46:38.382597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.835 [2024-11-07 09:46:38.382605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:10.835 [2024-11-07 09:46:38.382612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:10.835 [2024-11-07 09:46:38.382619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.835 [2024-11-07 09:46:38.382749] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:10.835 [2024-11-07 09:46:38.382762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:10.835 [2024-11-07 09:46:38.382770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:10.835 [2024-11-07 09:46:38.382778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:10.835 [2024-11-07 09:46:38.382785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:10.835 [2024-11-07 09:46:38.382792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:10.835 [2024-11-07 09:46:38.382799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:10.835 [2024-11-07 09:46:38.382807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:10.835 [2024-11-07 09:46:38.382814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:10.835 [2024-11-07 
09:46:38.382820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:10.835 [2024-11-07 09:46:38.382827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:10.835 [2024-11-07 09:46:38.382834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:10.835 [2024-11-07 09:46:38.382840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:10.835 [2024-11-07 09:46:38.382847] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:10.835 [2024-11-07 09:46:38.382854] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:10.835 [2024-11-07 09:46:38.382866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:10.835 [2024-11-07 09:46:38.382872] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:10.835 [2024-11-07 09:46:38.382879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:10.835 [2024-11-07 09:46:38.382886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:10.835 [2024-11-07 09:46:38.382893] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:10.835 [2024-11-07 09:46:38.382900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:10.835 [2024-11-07 09:46:38.382906] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:10.835 [2024-11-07 09:46:38.382912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:10.835 [2024-11-07 09:46:38.382919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:10.835 [2024-11-07 09:46:38.382925] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:10.835 [2024-11-07 09:46:38.382931] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:10.835 [2024-11-07 09:46:38.382937] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:10.835 [2024-11-07 09:46:38.382943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:10.835 [2024-11-07 09:46:38.382949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:10.835 [2024-11-07 09:46:38.382956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:10.835 [2024-11-07 09:46:38.382962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:10.835 [2024-11-07 09:46:38.382968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:10.835 [2024-11-07 09:46:38.382974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:10.835 [2024-11-07 09:46:38.382981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:10.835 [2024-11-07 09:46:38.382987] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:10.835 [2024-11-07 09:46:38.382993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:10.836 [2024-11-07 09:46:38.382999] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:10.836 [2024-11-07 09:46:38.383005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:10.836 [2024-11-07 09:46:38.383012] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:10.836 [2024-11-07 09:46:38.383018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:10.836 [2024-11-07 09:46:38.383024] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_log_mirror 00:18:10.836 [2024-11-07 09:46:38.383031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:10.836 [2024-11-07 09:46:38.383037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:10.836 [2024-11-07 09:46:38.383044] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:10.836 [2024-11-07 09:46:38.383051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:10.836 [2024-11-07 09:46:38.383058] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:10.836 [2024-11-07 09:46:38.383065] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:10.836 [2024-11-07 09:46:38.383072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:10.836 [2024-11-07 09:46:38.383079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:10.836 [2024-11-07 09:46:38.383086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:10.836 [2024-11-07 09:46:38.383093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:10.836 [2024-11-07 09:46:38.383099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:10.836 [2024-11-07 09:46:38.383105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:10.836 [2024-11-07 09:46:38.383113] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:10.836 [2024-11-07 09:46:38.383122] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:10.836 [2024-11-07 09:46:38.383131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:10.836 [2024-11-07 09:46:38.383138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:10.836 [2024-11-07 09:46:38.383144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:10.836 [2024-11-07 09:46:38.383151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:10.836 [2024-11-07 09:46:38.383158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:10.836 [2024-11-07 09:46:38.383165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:10.836 [2024-11-07 09:46:38.383172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:10.836 [2024-11-07 09:46:38.383179] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:10.836 [2024-11-07 09:46:38.383185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:10.836 [2024-11-07 09:46:38.383192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:10.836 [2024-11-07 09:46:38.383198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:10.836 [2024-11-07 09:46:38.383205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:10.836 [2024-11-07 09:46:38.383212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:10.836 [2024-11-07 09:46:38.383219] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:10.836 [2024-11-07 09:46:38.383226] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:10.836 [2024-11-07 09:46:38.383236] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:10.836 [2024-11-07 09:46:38.383253] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:10.836 [2024-11-07 09:46:38.383260] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:10.836 [2024-11-07 09:46:38.383267] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:10.836 [2024-11-07 09:46:38.383275] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:10.836 [2024-11-07 09:46:38.383282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.836 [2024-11-07 09:46:38.383289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:10.836 [2024-11-07 09:46:38.383297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.601 ms 00:18:10.836 [2024-11-07 09:46:38.383304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.836 [2024-11-07 09:46:38.409033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.836 [2024-11-07 09:46:38.409249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:10.836 [2024-11-07 09:46:38.409267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.684 ms 00:18:10.836 [2024-11-07 09:46:38.409275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.836 [2024-11-07 09:46:38.409376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.836 [2024-11-07 09:46:38.409385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:10.836 [2024-11-07 09:46:38.409393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:18:10.836 [2024-11-07 09:46:38.409400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.836 [2024-11-07 09:46:38.455321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.836 [2024-11-07 09:46:38.455376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:10.836 [2024-11-07 09:46:38.455390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.857 ms 00:18:10.836 [2024-11-07 09:46:38.455398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.836 [2024-11-07 09:46:38.455454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.836 [2024-11-07 
09:46:38.455463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:10.836 [2024-11-07 09:46:38.455472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:10.836 [2024-11-07 09:46:38.455483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.836 [2024-11-07 09:46:38.455886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.836 [2024-11-07 09:46:38.455903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:10.836 [2024-11-07 09:46:38.455912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.324 ms 00:18:10.836 [2024-11-07 09:46:38.455919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.836 [2024-11-07 09:46:38.456050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.836 [2024-11-07 09:46:38.456060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:10.836 [2024-11-07 09:46:38.456067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:18:10.836 [2024-11-07 09:46:38.456077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.836 [2024-11-07 09:46:38.468998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.836 [2024-11-07 09:46:38.469037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:10.836 [2024-11-07 09:46:38.469050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.902 ms 00:18:10.836 [2024-11-07 09:46:38.469058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.836 [2024-11-07 09:46:38.481460] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:18:10.836 [2024-11-07 09:46:38.481503] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:10.836 [2024-11-07 09:46:38.481515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.836 [2024-11-07 09:46:38.481523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:10.836 [2024-11-07 09:46:38.481532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.352 ms 00:18:10.836 [2024-11-07 09:46:38.481540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.095 [2024-11-07 09:46:38.506079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.095 [2024-11-07 09:46:38.506137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:11.095 [2024-11-07 09:46:38.506158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.491 ms 00:18:11.095 [2024-11-07 09:46:38.506166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.095 [2024-11-07 09:46:38.518116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.095 [2024-11-07 09:46:38.518166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:11.095 [2024-11-07 09:46:38.518178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.895 ms 00:18:11.095 [2024-11-07 09:46:38.518185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.095 [2024-11-07 09:46:38.529295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.095 [2024-11-07 09:46:38.529456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore trim metadata 00:18:11.095 [2024-11-07 09:46:38.529474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.070 ms 00:18:11.095 [2024-11-07 09:46:38.529481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.095 [2024-11-07 09:46:38.530115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.095 [2024-11-07 09:46:38.530134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:11.095 [2024-11-07 09:46:38.530143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.541 ms 00:18:11.095 [2024-11-07 09:46:38.530150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.095 [2024-11-07 09:46:38.584652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.095 [2024-11-07 09:46:38.584711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:11.095 [2024-11-07 09:46:38.584724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.481 ms 00:18:11.095 [2024-11-07 09:46:38.584737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.095 [2024-11-07 09:46:38.595539] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:11.095 [2024-11-07 09:46:38.598239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.095 [2024-11-07 09:46:38.598274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:11.095 [2024-11-07 09:46:38.598287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.449 ms 00:18:11.095 [2024-11-07 09:46:38.598295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.095 [2024-11-07 09:46:38.598402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.095 [2024-11-07 09:46:38.598413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:11.095 [2024-11-07 09:46:38.598422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:11.095 [2024-11-07 09:46:38.598429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.095 [2024-11-07 09:46:38.598496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.095 [2024-11-07 09:46:38.598507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:11.095 [2024-11-07 09:46:38.598515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:18:11.095 [2024-11-07 09:46:38.598522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.095 [2024-11-07 09:46:38.598540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.095 [2024-11-07 09:46:38.598549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:11.095 [2024-11-07 09:46:38.598556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:11.095 [2024-11-07 09:46:38.598564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.095 [2024-11-07 09:46:38.598595] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:11.095 [2024-11-07 09:46:38.598606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.095 [2024-11-07 09:46:38.598615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:11.095 [2024-11-07 09:46:38.598622] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:11.095 [2024-11-07 09:46:38.598648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.095 [2024-11-07 09:46:38.622149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.095 [2024-11-07 09:46:38.622197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:11.095 [2024-11-07 09:46:38.622209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.481 ms 00:18:11.095 [2024-11-07 09:46:38.622217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.095 [2024-11-07 09:46:38.622295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.095 [2024-11-07 09:46:38.622305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:11.095 [2024-11-07 09:46:38.622313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:18:11.095 [2024-11-07 09:46:38.622320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.095 [2024-11-07 09:46:38.623279] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 263.909 ms, result 0 00:18:12.028  [2024-11-07T09:46:41.072Z] Copying: 44/1024 [MB] (44 MBps) [per-second progress-meter entries collapsed; rates varied between 14 and 46 MBps] [2024-11-07T09:47:09.724Z] Copying: 1024/1024 [MB] (average 33 MBps)[2024-11-07 09:47:09.617939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.053 [2024-11-07 09:47:09.617991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:42.053 [2024-11-07 09:47:09.618004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:42.053 [2024-11-07 09:47:09.618013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
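The 1024 MiB copy above is the write phase of the ftl restore test: data is pushed through the ftl0 bdev, and the device is then shut down (the 'Set FTL clean state' step below marks the shutdown as clean) so that a later startup can restore it. A minimal sketch of how such a write can be driven with spdk_dd, reusing the flags the read-back step uses later in this log; --ob as the write-side counterpart of --ib is an assumption, and the paths are illustrative:

  # push 262144 x 4 KiB blocks (1 GiB) of test data through the FTL bdev
  ./build/bin/spdk_dd --if=test/ftl/testfile --ob=ftl0 \
      --json=test/ftl/config/ftl.json --count=262144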
00:18:42.053 [2024-11-07 09:47:09.618033] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:42.053 [2024-11-07 09:47:09.620672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.053 [2024-11-07 09:47:09.620804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:42.053 [2024-11-07 09:47:09.620820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.624 ms 00:18:42.053 [2024-11-07 09:47:09.620828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.053 [2024-11-07 09:47:09.622713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.053 [2024-11-07 09:47:09.622743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:42.053 [2024-11-07 09:47:09.622752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.856 ms 00:18:42.053 [2024-11-07 09:47:09.622759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.053 [2024-11-07 09:47:09.639650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.053 [2024-11-07 09:47:09.639770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:42.053 [2024-11-07 09:47:09.639785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.876 ms 00:18:42.053 [2024-11-07 09:47:09.639793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.053 [2024-11-07 09:47:09.645937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.053 [2024-11-07 09:47:09.645968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:42.053 [2024-11-07 09:47:09.645977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.117 ms 00:18:42.053 [2024-11-07 09:47:09.645984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.053 [2024-11-07 09:47:09.669934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.053 [2024-11-07 09:47:09.669966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:42.053 [2024-11-07 09:47:09.669976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.902 ms 00:18:42.053 [2024-11-07 09:47:09.669983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.053 [2024-11-07 09:47:09.684055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.053 [2024-11-07 09:47:09.684097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:42.053 [2024-11-07 09:47:09.684108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.039 ms 00:18:42.053 [2024-11-07 09:47:09.684115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.053 [2024-11-07 09:47:09.684235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.053 [2024-11-07 09:47:09.684244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:42.053 [2024-11-07 09:47:09.684258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:18:42.053 [2024-11-07 09:47:09.684265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.053 [2024-11-07 09:47:09.708185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.053 [2024-11-07 09:47:09.708216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:42.053 
[2024-11-07 09:47:09.708227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.905 ms 00:18:42.053 [2024-11-07 09:47:09.708235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.315 [2024-11-07 09:47:09.731237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.315 [2024-11-07 09:47:09.731389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:42.315 [2024-11-07 09:47:09.731414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.970 ms 00:18:42.315 [2024-11-07 09:47:09.731421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.315 [2024-11-07 09:47:09.754097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.315 [2024-11-07 09:47:09.754214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:42.315 [2024-11-07 09:47:09.754229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.647 ms 00:18:42.315 [2024-11-07 09:47:09.754236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.315 [2024-11-07 09:47:09.777144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.315 [2024-11-07 09:47:09.777257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:42.315 [2024-11-07 09:47:09.777272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.859 ms 00:18:42.315 [2024-11-07 09:47:09.777279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.315 [2024-11-07 09:47:09.777305] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:42.315 [2024-11-07 09:47:09.777319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free [Band 2 through Band 99 collapsed: all 98 entries identical, 0 / 261120 wr_cnt: 0 state: free] 00:18:42.316 [2024-11-07 09:47:09.778093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:42.316 [2024-11-07 09:47:09.778108] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:42.316 [2024-11-07 09:47:09.778119] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 60039578-222b-4b90-a79a-9095c30dd114 00:18:42.316 [2024-11-07 09:47:09.778127] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:42.316 [2024-11-07 09:47:09.778136] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:42.316 [2024-11-07 09:47:09.778143] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:42.316 [2024-11-07 09:47:09.778150] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:42.316 [2024-11-07 09:47:09.778158] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:42.316 [2024-11-07 09:47:09.778165] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:42.316 [2024-11-07 09:47:09.778172] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:42.316 [2024-11-07 09:47:09.778184] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:42.316 [2024-11-07 09:47:09.778190] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:42.316 [2024-11-07 09:47:09.778197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.316 [2024-11-07 09:47:09.778205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:42.316 [2024-11-07 09:47:09.778212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.893 ms 00:18:42.316 [2024-11-07 09:47:09.778219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.316 [2024-11-07 09:47:09.790600] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:18:42.316 [2024-11-07 09:47:09.790640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:42.316 [2024-11-07 09:47:09.790651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.365 ms 00:18:42.316 [2024-11-07 09:47:09.790658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.316 [2024-11-07 09:47:09.791004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.316 [2024-11-07 09:47:09.791012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:42.316 [2024-11-07 09:47:09.791021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:18:42.316 [2024-11-07 09:47:09.791028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.316 [2024-11-07 09:47:09.823862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.316 [2024-11-07 09:47:09.823977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:42.316 [2024-11-07 09:47:09.823992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.316 [2024-11-07 09:47:09.824000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.316 [2024-11-07 09:47:09.824055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.316 [2024-11-07 09:47:09.824063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:42.316 [2024-11-07 09:47:09.824071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.316 [2024-11-07 09:47:09.824078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.316 [2024-11-07 09:47:09.824152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.316 [2024-11-07 09:47:09.824161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:42.316 [2024-11-07 09:47:09.824170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.316 [2024-11-07 09:47:09.824177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.316 [2024-11-07 09:47:09.824191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.316 [2024-11-07 09:47:09.824198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:42.316 [2024-11-07 09:47:09.824205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.316 [2024-11-07 09:47:09.824212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.316 [2024-11-07 09:47:09.901731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.316 [2024-11-07 09:47:09.901771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:42.316 [2024-11-07 09:47:09.901783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.316 [2024-11-07 09:47:09.901790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.316 [2024-11-07 09:47:09.965261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.316 [2024-11-07 09:47:09.965303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:42.316 [2024-11-07 09:47:09.965314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.316 [2024-11-07 09:47:09.965321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
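Every management step in these traces is emitted by trace_step() in mngt/ftl_mngt.c as an Action (or Rollback) record followed by name, duration, and status records. When hunting for slow steps in a log like this, a small awk pipeline can pair the name and duration records into a sorted summary; this is an illustrative sketch that assumes the raw, one-record-per-line Jenkins log as input:

  # print FTL management steps sorted by duration, slowest first
  awk '/trace_step.*name:/     { sub(/.*name: /, "");     n = $0 }
       /trace_step.*duration:/ { sub(/.*duration: /, ""); print $1 " ms\t" n }' \
      build.log | sort -rn | head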
00:18:42.316 [2024-11-07 09:47:09.965372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.316 [2024-11-07 09:47:09.965385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:42.316 [2024-11-07 09:47:09.965393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.316 [2024-11-07 09:47:09.965401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.316 [2024-11-07 09:47:09.965451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.316 [2024-11-07 09:47:09.965460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:42.316 [2024-11-07 09:47:09.965468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.316 [2024-11-07 09:47:09.965475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.316 [2024-11-07 09:47:09.965563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.316 [2024-11-07 09:47:09.965575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:42.316 [2024-11-07 09:47:09.965583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.316 [2024-11-07 09:47:09.965590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.316 [2024-11-07 09:47:09.965618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.316 [2024-11-07 09:47:09.965626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:42.316 [2024-11-07 09:47:09.965652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.316 [2024-11-07 09:47:09.965660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.316 [2024-11-07 09:47:09.965694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.316 [2024-11-07 09:47:09.965702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:42.316 [2024-11-07 09:47:09.965713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.316 [2024-11-07 09:47:09.965721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.316 [2024-11-07 09:47:09.965761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.316 [2024-11-07 09:47:09.965770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:42.316 [2024-11-07 09:47:09.965778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.316 [2024-11-07 09:47:09.965786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.316 [2024-11-07 09:47:09.965895] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 347.926 ms, result 0 00:18:45.629 00:18:45.629 00:18:45.629 09:47:12 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:18:45.629 [2024-11-07 09:47:12.908706] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
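For scale: --count=262144 in the spdk_dd invocation above, taken with a 4 KiB FTL block size (an assumption, but consistent with the 1024 MiB totals in the progress meters), works out to exactly 1 GiB:

  # 262144 blocks x 4096 bytes/block = 2^30 bytes
  echo $((262144 * 4096 / 1024 / 1024))   # 1024 (MiB)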
00:18:45.629 [2024-11-07 09:47:12.908981] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74921 ] 00:18:45.629 [2024-11-07 09:47:13.069939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.629 [2024-11-07 09:47:13.170979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:45.890 [2024-11-07 09:47:13.426298] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:45.890 [2024-11-07 09:47:13.426365] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:46.153 [2024-11-07 09:47:13.583604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.153 [2024-11-07 09:47:13.583670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:46.153 [2024-11-07 09:47:13.583695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:46.153 [2024-11-07 09:47:13.583706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.153 [2024-11-07 09:47:13.583767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.153 [2024-11-07 09:47:13.583795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:46.153 [2024-11-07 09:47:13.583810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:18:46.153 [2024-11-07 09:47:13.583823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.153 [2024-11-07 09:47:13.583852] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:46.153 [2024-11-07 09:47:13.584686] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:46.153 [2024-11-07 09:47:13.584718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.153 [2024-11-07 09:47:13.584730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:46.153 [2024-11-07 09:47:13.584742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.872 ms 00:18:46.153 [2024-11-07 09:47:13.584754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.153 [2024-11-07 09:47:13.585902] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:46.153 [2024-11-07 09:47:13.598842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.153 [2024-11-07 09:47:13.598971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:46.153 [2024-11-07 09:47:13.598994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.941 ms 00:18:46.153 [2024-11-07 09:47:13.599006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.153 [2024-11-07 09:47:13.599072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.153 [2024-11-07 09:47:13.599087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:46.153 [2024-11-07 09:47:13.599100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:18:46.153 [2024-11-07 09:47:13.599112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.153 [2024-11-07 09:47:13.604304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:18:46.153 [2024-11-07 09:47:13.604406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:46.153 [2024-11-07 09:47:13.604482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.095 ms 00:18:46.153 [2024-11-07 09:47:13.604520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.153 [2024-11-07 09:47:13.604647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.153 [2024-11-07 09:47:13.604732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:46.153 [2024-11-07 09:47:13.604767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:18:46.153 [2024-11-07 09:47:13.604800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.153 [2024-11-07 09:47:13.604919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.153 [2024-11-07 09:47:13.604967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:46.153 [2024-11-07 09:47:13.605142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:46.153 [2024-11-07 09:47:13.605180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.153 [2024-11-07 09:47:13.605237] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:46.153 [2024-11-07 09:47:13.608613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.153 [2024-11-07 09:47:13.608732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:46.153 [2024-11-07 09:47:13.608808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.384 ms 00:18:46.153 [2024-11-07 09:47:13.608851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.153 [2024-11-07 09:47:13.608921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.153 [2024-11-07 09:47:13.608964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:46.153 [2024-11-07 09:47:13.608999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:18:46.153 [2024-11-07 09:47:13.609094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.153 [2024-11-07 09:47:13.609167] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:46.153 [2024-11-07 09:47:13.609218] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:46.153 [2024-11-07 09:47:13.609310] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:46.153 [2024-11-07 09:47:13.609427] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:46.153 [2024-11-07 09:47:13.609607] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:46.153 [2024-11-07 09:47:13.609665] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:46.153 [2024-11-07 09:47:13.609719] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:46.153 [2024-11-07 09:47:13.609777] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:46.153 [2024-11-07 09:47:13.610017] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:46.153 [2024-11-07 09:47:13.610070] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:46.153 [2024-11-07 09:47:13.610145] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:46.153 [2024-11-07 09:47:13.610182] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:46.153 [2024-11-07 09:47:13.610214] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:46.153 [2024-11-07 09:47:13.610296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.154 [2024-11-07 09:47:13.610334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:46.154 [2024-11-07 09:47:13.610411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.133 ms 00:18:46.154 [2024-11-07 09:47:13.610448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.154 [2024-11-07 09:47:13.610619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.154 [2024-11-07 09:47:13.610674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:46.154 [2024-11-07 09:47:13.610709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:18:46.154 [2024-11-07 09:47:13.610781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.154 [2024-11-07 09:47:13.610956] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:46.154 [2024-11-07 09:47:13.611033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:46.154 [2024-11-07 09:47:13.611070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:46.154 [2024-11-07 09:47:13.611104] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:46.154 [2024-11-07 09:47:13.611138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:46.154 [2024-11-07 09:47:13.611171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:46.154 [2024-11-07 09:47:13.611205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:46.154 [2024-11-07 09:47:13.611237] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:46.154 [2024-11-07 09:47:13.611282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:46.154 [2024-11-07 09:47:13.611418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:46.154 [2024-11-07 09:47:13.611456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:46.154 [2024-11-07 09:47:13.611490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:46.154 [2024-11-07 09:47:13.611524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:46.154 [2024-11-07 09:47:13.611557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:46.154 [2024-11-07 09:47:13.611590] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:46.154 [2024-11-07 09:47:13.611642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:46.154 [2024-11-07 09:47:13.611676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:46.154 [2024-11-07 09:47:13.611709] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:46.154 [2024-11-07 09:47:13.611852] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:46.154 [2024-11-07 09:47:13.611888] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:46.154 [2024-11-07 09:47:13.611921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:46.154 [2024-11-07 09:47:13.611953] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:46.154 [2024-11-07 09:47:13.611987] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:46.154 [2024-11-07 09:47:13.612021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:46.154 [2024-11-07 09:47:13.612053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:46.154 [2024-11-07 09:47:13.612085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:46.154 [2024-11-07 09:47:13.612118] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:46.154 [2024-11-07 09:47:13.612205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:46.154 [2024-11-07 09:47:13.612243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:46.154 [2024-11-07 09:47:13.612277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:46.154 [2024-11-07 09:47:13.612309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:46.154 [2024-11-07 09:47:13.612341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:46.154 [2024-11-07 09:47:13.612374] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:46.154 [2024-11-07 09:47:13.612407] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:46.154 [2024-11-07 09:47:13.612441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:46.154 [2024-11-07 09:47:13.612474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:46.154 [2024-11-07 09:47:13.612544] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:46.154 [2024-11-07 09:47:13.612580] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:46.154 [2024-11-07 09:47:13.612613] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:46.154 [2024-11-07 09:47:13.612667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:46.154 [2024-11-07 09:47:13.613016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:46.154 [2024-11-07 09:47:13.613059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:46.154 [2024-11-07 09:47:13.613156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:46.154 [2024-11-07 09:47:13.613194] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:46.154 [2024-11-07 09:47:13.613229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:46.154 [2024-11-07 09:47:13.613682] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:46.154 [2024-11-07 09:47:13.613698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:46.154 [2024-11-07 09:47:13.613712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:46.154 [2024-11-07 09:47:13.613725] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:46.154 [2024-11-07 09:47:13.613736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:46.154 
[2024-11-07 09:47:13.613748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:46.154 [2024-11-07 09:47:13.613759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:46.154 [2024-11-07 09:47:13.613770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:46.154 [2024-11-07 09:47:13.613784] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:46.154 [2024-11-07 09:47:13.613799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:46.154 [2024-11-07 09:47:13.613813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:46.154 [2024-11-07 09:47:13.613826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:46.154 [2024-11-07 09:47:13.613838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:46.154 [2024-11-07 09:47:13.613850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:46.154 [2024-11-07 09:47:13.613862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:46.154 [2024-11-07 09:47:13.613875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:46.154 [2024-11-07 09:47:13.613886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:46.154 [2024-11-07 09:47:13.613899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:46.154 [2024-11-07 09:47:13.613911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:46.154 [2024-11-07 09:47:13.613923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:46.154 [2024-11-07 09:47:13.613936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:46.154 [2024-11-07 09:47:13.613948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:46.154 [2024-11-07 09:47:13.613960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:46.154 [2024-11-07 09:47:13.613973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:46.154 [2024-11-07 09:47:13.613984] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:46.154 [2024-11-07 09:47:13.614002] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:46.154 [2024-11-07 09:47:13.614016] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:18:46.154 [2024-11-07 09:47:13.614028] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:46.154 [2024-11-07 09:47:13.614041] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:46.154 [2024-11-07 09:47:13.614054] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:46.154 [2024-11-07 09:47:13.614068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.154 [2024-11-07 09:47:13.614080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:46.154 [2024-11-07 09:47:13.614092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.210 ms 00:18:46.154 [2024-11-07 09:47:13.614104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.154 [2024-11-07 09:47:13.640094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.154 [2024-11-07 09:47:13.640223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:46.154 [2024-11-07 09:47:13.640244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.904 ms 00:18:46.154 [2024-11-07 09:47:13.640256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.154 [2024-11-07 09:47:13.640371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.154 [2024-11-07 09:47:13.640386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:46.154 [2024-11-07 09:47:13.640400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:18:46.154 [2024-11-07 09:47:13.640412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.154 [2024-11-07 09:47:13.681355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.154 [2024-11-07 09:47:13.681396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:46.154 [2024-11-07 09:47:13.681412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.879 ms 00:18:46.154 [2024-11-07 09:47:13.681424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.154 [2024-11-07 09:47:13.681471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.154 [2024-11-07 09:47:13.681486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:46.154 [2024-11-07 09:47:13.681499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:46.155 [2024-11-07 09:47:13.681515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.155 [2024-11-07 09:47:13.681955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.155 [2024-11-07 09:47:13.681983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:46.155 [2024-11-07 09:47:13.681997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.353 ms 00:18:46.155 [2024-11-07 09:47:13.682008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.155 [2024-11-07 09:47:13.682180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.155 [2024-11-07 09:47:13.682206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:46.155 [2024-11-07 09:47:13.682219] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:18:46.155 [2024-11-07 09:47:13.682235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.155 [2024-11-07 09:47:13.695486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.155 [2024-11-07 09:47:13.695520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:46.155 [2024-11-07 09:47:13.695537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.222 ms 00:18:46.155 [2024-11-07 09:47:13.695548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.155 [2024-11-07 09:47:13.708231] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:46.155 [2024-11-07 09:47:13.708267] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:46.155 [2024-11-07 09:47:13.708284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.155 [2024-11-07 09:47:13.708295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:46.155 [2024-11-07 09:47:13.708308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.599 ms 00:18:46.155 [2024-11-07 09:47:13.708319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.155 [2024-11-07 09:47:13.732882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.155 [2024-11-07 09:47:13.732924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:46.155 [2024-11-07 09:47:13.732939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.515 ms 00:18:46.155 [2024-11-07 09:47:13.732950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.155 [2024-11-07 09:47:13.745168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.155 [2024-11-07 09:47:13.745205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:46.155 [2024-11-07 09:47:13.745220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.168 ms 00:18:46.155 [2024-11-07 09:47:13.745231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.155 [2024-11-07 09:47:13.756728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.155 [2024-11-07 09:47:13.756760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:46.155 [2024-11-07 09:47:13.756774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.454 ms 00:18:46.155 [2024-11-07 09:47:13.756786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.155 [2024-11-07 09:47:13.757485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.155 [2024-11-07 09:47:13.757519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:46.155 [2024-11-07 09:47:13.757533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.588 ms 00:18:46.155 [2024-11-07 09:47:13.757547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.155 [2024-11-07 09:47:13.812847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.155 [2024-11-07 09:47:13.812901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:46.155 [2024-11-07 09:47:13.812925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 55.273 ms 00:18:46.155 [2024-11-07 09:47:13.812937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.417 [2024-11-07 09:47:13.823641] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:46.417 [2024-11-07 09:47:13.826159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.417 [2024-11-07 09:47:13.826198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:46.417 [2024-11-07 09:47:13.826214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.166 ms 00:18:46.417 [2024-11-07 09:47:13.826226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.417 [2024-11-07 09:47:13.826342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.417 [2024-11-07 09:47:13.826359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:46.417 [2024-11-07 09:47:13.826372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:46.417 [2024-11-07 09:47:13.826388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.417 [2024-11-07 09:47:13.826481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.417 [2024-11-07 09:47:13.826496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:46.417 [2024-11-07 09:47:13.826510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:18:46.417 [2024-11-07 09:47:13.826522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.417 [2024-11-07 09:47:13.826554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.417 [2024-11-07 09:47:13.826568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:46.417 [2024-11-07 09:47:13.826582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:46.417 [2024-11-07 09:47:13.826594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.417 [2024-11-07 09:47:13.826657] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:46.417 [2024-11-07 09:47:13.826678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.417 [2024-11-07 09:47:13.826690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:46.417 [2024-11-07 09:47:13.826704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:18:46.417 [2024-11-07 09:47:13.826716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.417 [2024-11-07 09:47:13.851418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.417 [2024-11-07 09:47:13.851475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:46.417 [2024-11-07 09:47:13.851493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.670 ms 00:18:46.417 [2024-11-07 09:47:13.851510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.417 [2024-11-07 09:47:13.851611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.417 [2024-11-07 09:47:13.851626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:46.417 [2024-11-07 09:47:13.851662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:18:46.417 [2024-11-07 09:47:13.851675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:18:46.417 [2024-11-07 09:47:13.852732] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 268.661 ms, result 0 00:18:47.830 [2024-11-07T09:47:16.075Z] Copying: 15/1024 [MB] (15 MBps) [... spdk_dd progress ticker condensed: ~90 intermediate updates between 2024-11-07T09:47:16Z and 09:48:44Z, steady 10-15 MBps ...] [2024-11-07T09:48:44.406Z] Copying: 1024/1024 [MB] (average 11 MBps)[2024-11-07 09:48:44.361227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.735 [2024-11-07 09:48:44.361281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:16.735 [2024-11-07 09:48:44.361295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:16.735 [2024-11-07 09:48:44.361305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.735 [2024-11-07 09:48:44.361327] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:16.735 [2024-11-07 09:48:44.364231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.735 [2024-11-07 09:48:44.364265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:16.735 [2024-11-07 09:48:44.364281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.889 ms 00:20:16.735 [2024-11-07 09:48:44.364290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.735 [2024-11-07 09:48:44.364820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.735 [2024-11-07 09:48:44.364837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:16.735 [2024-11-07 09:48:44.364847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.507 ms 00:20:16.735 [2024-11-07 09:48:44.364855]
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.735 [2024-11-07 09:48:44.368765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.735 [2024-11-07 09:48:44.368789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:16.735 [2024-11-07 09:48:44.368800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.895 ms 00:20:16.735 [2024-11-07 09:48:44.368809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.735 [2024-11-07 09:48:44.375276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.735 [2024-11-07 09:48:44.375312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:16.735 [2024-11-07 09:48:44.375326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.443 ms 00:20:16.735 [2024-11-07 09:48:44.375335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.735 [2024-11-07 09:48:44.400009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.735 [2024-11-07 09:48:44.400047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:16.735 [2024-11-07 09:48:44.400059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.615 ms 00:20:16.735 [2024-11-07 09:48:44.400067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.997 [2024-11-07 09:48:44.414287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.997 [2024-11-07 09:48:44.414325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:16.997 [2024-11-07 09:48:44.414338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.182 ms 00:20:16.997 [2024-11-07 09:48:44.414347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.997 [2024-11-07 09:48:44.414481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.997 [2024-11-07 09:48:44.414496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:16.997 [2024-11-07 09:48:44.414504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:20:16.997 [2024-11-07 09:48:44.414512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.997 [2024-11-07 09:48:44.437945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.997 [2024-11-07 09:48:44.437994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:16.997 [2024-11-07 09:48:44.438006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.418 ms 00:20:16.997 [2024-11-07 09:48:44.438014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.997 [2024-11-07 09:48:44.461289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.997 [2024-11-07 09:48:44.461336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:16.997 [2024-11-07 09:48:44.461347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.238 ms 00:20:16.997 [2024-11-07 09:48:44.461355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.997 [2024-11-07 09:48:44.483996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.997 [2024-11-07 09:48:44.484040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:16.997 [2024-11-07 09:48:44.484052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 22.609 ms 00:20:16.997 [2024-11-07 09:48:44.484059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.997 [2024-11-07 09:48:44.506701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.997 [2024-11-07 09:48:44.506734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:16.997 [2024-11-07 09:48:44.506744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.586 ms 00:20:16.997 [2024-11-07 09:48:44.506751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.997 [2024-11-07 09:48:44.506783] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:16.997 [2024-11-07 09:48:44.506799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:16.997 [2024-11-07 09:48:44.506813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:16.997 [2024-11-07 09:48:44.506821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:16.997 [2024-11-07 09:48:44.506829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:16.997 [2024-11-07 09:48:44.506836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:16.997 [2024-11-07 09:48:44.506844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:16.997 [2024-11-07 09:48:44.506852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:16.997 [2024-11-07 09:48:44.506859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:16.997 [2024-11-07 09:48:44.506867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:16.997 [2024-11-07 09:48:44.506874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:16.997 [2024-11-07 09:48:44.506882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:16.997 [2024-11-07 09:48:44.506890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:16.997 [2024-11-07 09:48:44.506897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:16.997 [2024-11-07 09:48:44.506905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:16.997 [2024-11-07 09:48:44.506912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:16.997 [2024-11-07 09:48:44.506919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:16.997 [2024-11-07 09:48:44.506926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:16.997 [2024-11-07 09:48:44.506933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:16.997 [2024-11-07 09:48:44.506940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:16.997 [2024-11-07 09:48:44.506948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 
00:20:16.997 [2024-11-07 09:48:44.506955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:16.997 [2024-11-07 09:48:44.506962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:16.997 [2024-11-07 09:48:44.506970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:16.997 [2024-11-07 09:48:44.506977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:16.997 [2024-11-07 09:48:44.506984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:16.997 [2024-11-07 09:48:44.506991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:16.997 [2024-11-07 09:48:44.506999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:16.997 [2024-11-07 09:48:44.507006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:16.997 [2024-11-07 09:48:44.507013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:16.997 [2024-11-07 09:48:44.507021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:16.997 [2024-11-07 09:48:44.507029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:16.997 [2024-11-07 09:48:44.507036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:16.997 [2024-11-07 09:48:44.507043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:16.997 [2024-11-07 09:48:44.507051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:16.997 [2024-11-07 09:48:44.507058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:16.997 [2024-11-07 09:48:44.507065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:16.997 [2024-11-07 09:48:44.507072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:16.997 [2024-11-07 09:48:44.507080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:16.997 [2024-11-07 09:48:44.507087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:16.997 [2024-11-07 09:48:44.507094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:16.997 [2024-11-07 09:48:44.507101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:16.998 [2024-11-07 09:48:44.507108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:16.998 [2024-11-07 09:48:44.507115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:16.998 [2024-11-07 09:48:44.507122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:16.998 [2024-11-07 09:48:44.507129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 
wr_cnt: 0 state: free 00:20:16.998 [2024-11-07 09:48:44.507136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:16.998 [2024-11-07 09:48:44.507143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:16.998 [2024-11-07 09:48:44.507150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:16.998 [2024-11-07 09:48:44.507158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:16.998 [2024-11-07 09:48:44.507166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:16.998 [2024-11-07 09:48:44.507174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:16.998 [2024-11-07 09:48:44.507182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:16.998 [2024-11-07 09:48:44.507189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:16.998 [2024-11-07 09:48:44.507196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:16.998 [2024-11-07 09:48:44.507203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:16.998 [2024-11-07 09:48:44.507211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:16.998 [2024-11-07 09:48:44.507218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:16.998 [2024-11-07 09:48:44.507225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:16.998 [2024-11-07 09:48:44.507232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:16.998 [2024-11-07 09:48:44.507240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:16.998 [2024-11-07 09:48:44.507247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:16.998 [2024-11-07 09:48:44.507254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:16.998 [2024-11-07 09:48:44.507261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:16.998 [2024-11-07 09:48:44.507282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:16.998 [2024-11-07 09:48:44.507293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:16.998 [2024-11-07 09:48:44.507304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:16.998 [2024-11-07 09:48:44.507316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:16.998 [2024-11-07 09:48:44.507323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:16.998 [2024-11-07 09:48:44.507331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:16.998 [2024-11-07 09:48:44.507338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:16.998 [2024-11-07 09:48:44.507345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:16.998 [2024-11-07 09:48:44.507353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:16.998 [2024-11-07 09:48:44.507361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:16.998 [2024-11-07 09:48:44.507369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:16.998 [2024-11-07 09:48:44.507376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:16.998 [2024-11-07 09:48:44.507383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:16.998 [2024-11-07 09:48:44.507390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:16.998 [2024-11-07 09:48:44.507398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:16.998 [2024-11-07 09:48:44.507405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:16.998 [2024-11-07 09:48:44.507413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:16.998 [2024-11-07 09:48:44.507420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:16.998 [2024-11-07 09:48:44.507428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:16.998 [2024-11-07 09:48:44.507435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:16.998 [2024-11-07 09:48:44.507442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:16.998 [2024-11-07 09:48:44.507449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:16.998 [2024-11-07 09:48:44.507456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:16.998 [2024-11-07 09:48:44.507463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:16.998 [2024-11-07 09:48:44.507471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:16.998 [2024-11-07 09:48:44.507478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:16.998 [2024-11-07 09:48:44.507486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:16.998 [2024-11-07 09:48:44.507493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:16.998 [2024-11-07 09:48:44.507499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:16.998 [2024-11-07 09:48:44.507506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:16.998 [2024-11-07 09:48:44.507514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:16.998 [2024-11-07 09:48:44.507521] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:16.998 [2024-11-07 09:48:44.507528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:16.998 [2024-11-07 09:48:44.507535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:16.998 [2024-11-07 09:48:44.507542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:16.998 [2024-11-07 09:48:44.507550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:16.998 [2024-11-07 09:48:44.507557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:16.998 [2024-11-07 09:48:44.507572] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:16.998 [2024-11-07 09:48:44.507588] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 60039578-222b-4b90-a79a-9095c30dd114 00:20:16.998 [2024-11-07 09:48:44.507596] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:16.998 [2024-11-07 09:48:44.507603] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:16.998 [2024-11-07 09:48:44.507611] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:16.998 [2024-11-07 09:48:44.507618] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:16.998 [2024-11-07 09:48:44.507625] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:16.998 [2024-11-07 09:48:44.507644] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:16.998 [2024-11-07 09:48:44.507659] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:16.998 [2024-11-07 09:48:44.507665] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:16.998 [2024-11-07 09:48:44.507672] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:16.998 [2024-11-07 09:48:44.507680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.998 [2024-11-07 09:48:44.507687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:16.998 [2024-11-07 09:48:44.507696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.897 ms 00:20:16.998 [2024-11-07 09:48:44.507703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.998 [2024-11-07 09:48:44.520007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.998 [2024-11-07 09:48:44.520036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:16.998 [2024-11-07 09:48:44.520046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.286 ms 00:20:16.998 [2024-11-07 09:48:44.520055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.998 [2024-11-07 09:48:44.520402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.998 [2024-11-07 09:48:44.520416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:16.998 [2024-11-07 09:48:44.520425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:20:16.998 [2024-11-07 09:48:44.520436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.998 [2024-11-07 09:48:44.552782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.998 [2024-11-07 09:48:44.552822] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:16.998 [2024-11-07 09:48:44.552834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.998 [2024-11-07 09:48:44.552841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.998 [2024-11-07 09:48:44.552902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.998 [2024-11-07 09:48:44.552910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:16.998 [2024-11-07 09:48:44.552917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.998 [2024-11-07 09:48:44.552929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.998 [2024-11-07 09:48:44.552989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.998 [2024-11-07 09:48:44.552999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:16.998 [2024-11-07 09:48:44.553007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.998 [2024-11-07 09:48:44.553014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.998 [2024-11-07 09:48:44.553028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.998 [2024-11-07 09:48:44.553036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:16.999 [2024-11-07 09:48:44.553044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.999 [2024-11-07 09:48:44.553051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.999 [2024-11-07 09:48:44.628382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.999 [2024-11-07 09:48:44.628429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:16.999 [2024-11-07 09:48:44.628440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.999 [2024-11-07 09:48:44.628448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.260 [2024-11-07 09:48:44.690661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.260 [2024-11-07 09:48:44.690700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:17.260 [2024-11-07 09:48:44.690711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.260 [2024-11-07 09:48:44.690719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.260 [2024-11-07 09:48:44.690789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.260 [2024-11-07 09:48:44.690798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:17.260 [2024-11-07 09:48:44.690806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.260 [2024-11-07 09:48:44.690813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.260 [2024-11-07 09:48:44.690845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.260 [2024-11-07 09:48:44.690855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:17.260 [2024-11-07 09:48:44.690862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.260 [2024-11-07 09:48:44.690869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.260 [2024-11-07 09:48:44.690953] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:20:17.260 [2024-11-07 09:48:44.690962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:17.260 [2024-11-07 09:48:44.690970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.260 [2024-11-07 09:48:44.690977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.260 [2024-11-07 09:48:44.691004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.260 [2024-11-07 09:48:44.691012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:17.260 [2024-11-07 09:48:44.691020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.260 [2024-11-07 09:48:44.691027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.260 [2024-11-07 09:48:44.691061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.260 [2024-11-07 09:48:44.691072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:17.260 [2024-11-07 09:48:44.691080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.260 [2024-11-07 09:48:44.691086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.260 [2024-11-07 09:48:44.691122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.260 [2024-11-07 09:48:44.691132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:17.260 [2024-11-07 09:48:44.691139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.260 [2024-11-07 09:48:44.691146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.260 [2024-11-07 09:48:44.691254] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 330.004 ms, result 0 00:20:17.830 00:20:17.830 00:20:17.830 09:48:45 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:19.743 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:20:19.743 09:48:47 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:20:19.743 [2024-11-07 09:48:47.150143] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:20:19.743 [2024-11-07 09:48:47.150271] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75896 ] 00:20:19.743 [2024-11-07 09:48:47.307571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.743 [2024-11-07 09:48:47.405239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:20.002 [2024-11-07 09:48:47.654489] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:20.002 [2024-11-07 09:48:47.654558] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:20.262 [2024-11-07 09:48:47.808263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.262 [2024-11-07 09:48:47.808320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:20.262 [2024-11-07 09:48:47.808338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:20.262 [2024-11-07 09:48:47.808346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.262 [2024-11-07 09:48:47.808392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.262 [2024-11-07 09:48:47.808402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:20.262 [2024-11-07 09:48:47.808412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:20:20.262 [2024-11-07 09:48:47.808419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.262 [2024-11-07 09:48:47.808437] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:20.262 [2024-11-07 09:48:47.809159] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:20.262 [2024-11-07 09:48:47.809180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.262 [2024-11-07 09:48:47.809189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:20.262 [2024-11-07 09:48:47.809197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.747 ms 00:20:20.262 [2024-11-07 09:48:47.809204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.262 [2024-11-07 09:48:47.810235] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:20.262 [2024-11-07 09:48:47.822445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.262 [2024-11-07 09:48:47.822482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:20.262 [2024-11-07 09:48:47.822494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.212 ms 00:20:20.262 [2024-11-07 09:48:47.822502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.262 [2024-11-07 09:48:47.822558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.262 [2024-11-07 09:48:47.822568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:20.262 [2024-11-07 09:48:47.822576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:20:20.262 [2024-11-07 09:48:47.822583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.262 [2024-11-07 09:48:47.827196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:20.262 [2024-11-07 09:48:47.827227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:20.262 [2024-11-07 09:48:47.827237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.543 ms 00:20:20.262 [2024-11-07 09:48:47.827248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.262 [2024-11-07 09:48:47.827335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.262 [2024-11-07 09:48:47.827345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:20.262 [2024-11-07 09:48:47.827357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:20:20.262 [2024-11-07 09:48:47.827364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.262 [2024-11-07 09:48:47.827407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.262 [2024-11-07 09:48:47.827416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:20.262 [2024-11-07 09:48:47.827427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:20.262 [2024-11-07 09:48:47.827435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.262 [2024-11-07 09:48:47.827462] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:20.262 [2024-11-07 09:48:47.830750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.262 [2024-11-07 09:48:47.830779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:20.262 [2024-11-07 09:48:47.830790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.293 ms 00:20:20.262 [2024-11-07 09:48:47.830797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.262 [2024-11-07 09:48:47.830827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.262 [2024-11-07 09:48:47.830836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:20.262 [2024-11-07 09:48:47.830844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:20.262 [2024-11-07 09:48:47.830851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.262 [2024-11-07 09:48:47.830870] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:20.262 [2024-11-07 09:48:47.830887] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:20.262 [2024-11-07 09:48:47.830921] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:20.262 [2024-11-07 09:48:47.830938] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:20.262 [2024-11-07 09:48:47.831039] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:20.262 [2024-11-07 09:48:47.831056] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:20.262 [2024-11-07 09:48:47.831067] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:20.262 [2024-11-07 09:48:47.831077] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:20.262 [2024-11-07 09:48:47.831087] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:20.262 [2024-11-07 09:48:47.831094] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:20.262 [2024-11-07 09:48:47.831101] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:20.262 [2024-11-07 09:48:47.831109] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:20.262 [2024-11-07 09:48:47.831119] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:20.262 [2024-11-07 09:48:47.831126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.262 [2024-11-07 09:48:47.831134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:20.262 [2024-11-07 09:48:47.831141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:20:20.262 [2024-11-07 09:48:47.831148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.262 [2024-11-07 09:48:47.831229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.262 [2024-11-07 09:48:47.831237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:20.262 [2024-11-07 09:48:47.831245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:20.262 [2024-11-07 09:48:47.831252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.262 [2024-11-07 09:48:47.831363] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:20.262 [2024-11-07 09:48:47.831379] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:20.262 [2024-11-07 09:48:47.831387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:20.262 [2024-11-07 09:48:47.831395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:20.262 [2024-11-07 09:48:47.831403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:20.262 [2024-11-07 09:48:47.831409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:20.262 [2024-11-07 09:48:47.831417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:20.262 [2024-11-07 09:48:47.831423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:20.262 [2024-11-07 09:48:47.831431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:20.262 [2024-11-07 09:48:47.831438] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:20.262 [2024-11-07 09:48:47.831444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:20.262 [2024-11-07 09:48:47.831451] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:20.262 [2024-11-07 09:48:47.831457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:20.262 [2024-11-07 09:48:47.831464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:20.262 [2024-11-07 09:48:47.831471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:20.262 [2024-11-07 09:48:47.831484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:20.262 [2024-11-07 09:48:47.831490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:20.262 [2024-11-07 09:48:47.831497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:20.262 [2024-11-07 09:48:47.831503] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:20.262 [2024-11-07 09:48:47.831510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:20.262 [2024-11-07 09:48:47.831516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:20.262 [2024-11-07 09:48:47.831522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:20.262 [2024-11-07 09:48:47.831529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:20.262 [2024-11-07 09:48:47.831535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:20.262 [2024-11-07 09:48:47.831541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:20.263 [2024-11-07 09:48:47.831547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:20.263 [2024-11-07 09:48:47.831554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:20.263 [2024-11-07 09:48:47.831560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:20.263 [2024-11-07 09:48:47.831566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:20.263 [2024-11-07 09:48:47.831573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:20.263 [2024-11-07 09:48:47.831579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:20.263 [2024-11-07 09:48:47.831586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:20.263 [2024-11-07 09:48:47.831592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:20.263 [2024-11-07 09:48:47.831598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:20.263 [2024-11-07 09:48:47.831604] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:20.263 [2024-11-07 09:48:47.831611] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:20.263 [2024-11-07 09:48:47.831617] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:20.263 [2024-11-07 09:48:47.831624] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:20.263 [2024-11-07 09:48:47.831649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:20.263 [2024-11-07 09:48:47.831656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:20.263 [2024-11-07 09:48:47.831662] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:20.263 [2024-11-07 09:48:47.831669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:20.263 [2024-11-07 09:48:47.831675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:20.263 [2024-11-07 09:48:47.831683] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:20.263 [2024-11-07 09:48:47.831690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:20.263 [2024-11-07 09:48:47.831697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:20.263 [2024-11-07 09:48:47.831704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:20.263 [2024-11-07 09:48:47.831712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:20.263 [2024-11-07 09:48:47.831720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:20.263 [2024-11-07 09:48:47.831726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:20.263 
[2024-11-07 09:48:47.831733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:20.263 [2024-11-07 09:48:47.831739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:20.263 [2024-11-07 09:48:47.831746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:20.263 [2024-11-07 09:48:47.831754] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:20.263 [2024-11-07 09:48:47.831763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:20.263 [2024-11-07 09:48:47.831774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:20.263 [2024-11-07 09:48:47.831781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:20.263 [2024-11-07 09:48:47.831788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:20.263 [2024-11-07 09:48:47.831795] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:20.263 [2024-11-07 09:48:47.831802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:20.263 [2024-11-07 09:48:47.831808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:20.263 [2024-11-07 09:48:47.831815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:20.263 [2024-11-07 09:48:47.831822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:20.263 [2024-11-07 09:48:47.831829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:20.263 [2024-11-07 09:48:47.831836] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:20.263 [2024-11-07 09:48:47.831843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:20.263 [2024-11-07 09:48:47.831850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:20.263 [2024-11-07 09:48:47.831856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:20.263 [2024-11-07 09:48:47.831863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:20.263 [2024-11-07 09:48:47.831870] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:20.263 [2024-11-07 09:48:47.831878] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:20.263 [2024-11-07 09:48:47.831885] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:20.263 [2024-11-07 09:48:47.831893] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:20.263 [2024-11-07 09:48:47.831900] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:20.263 [2024-11-07 09:48:47.831907] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:20.263 [2024-11-07 09:48:47.831914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.263 [2024-11-07 09:48:47.831921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:20.263 [2024-11-07 09:48:47.831929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.619 ms 00:20:20.263 [2024-11-07 09:48:47.831936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.263 [2024-11-07 09:48:47.857174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.263 [2024-11-07 09:48:47.857213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:20.263 [2024-11-07 09:48:47.857224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.184 ms 00:20:20.263 [2024-11-07 09:48:47.857235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.263 [2024-11-07 09:48:47.857319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.263 [2024-11-07 09:48:47.857328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:20.263 [2024-11-07 09:48:47.857335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:20:20.263 [2024-11-07 09:48:47.857342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.263 [2024-11-07 09:48:47.901403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.263 [2024-11-07 09:48:47.901452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:20.263 [2024-11-07 09:48:47.901465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.005 ms 00:20:20.263 [2024-11-07 09:48:47.901473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.263 [2024-11-07 09:48:47.901523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.263 [2024-11-07 09:48:47.901533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:20.263 [2024-11-07 09:48:47.901545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:20.263 [2024-11-07 09:48:47.901553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.263 [2024-11-07 09:48:47.901938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.263 [2024-11-07 09:48:47.901963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:20.263 [2024-11-07 09:48:47.901973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:20:20.263 [2024-11-07 09:48:47.901981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.263 [2024-11-07 09:48:47.902115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.263 [2024-11-07 09:48:47.902134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:20.263 [2024-11-07 09:48:47.902144] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:20:20.263 [2024-11-07 09:48:47.902152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.263 [2024-11-07 09:48:47.914871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.263 [2024-11-07 09:48:47.914908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:20.263 [2024-11-07 09:48:47.914918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.702 ms 00:20:20.263 [2024-11-07 09:48:47.914926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.263 [2024-11-07 09:48:47.927212] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:20.263 [2024-11-07 09:48:47.927246] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:20.263 [2024-11-07 09:48:47.927257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.263 [2024-11-07 09:48:47.927271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:20.263 [2024-11-07 09:48:47.927281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.234 ms 00:20:20.263 [2024-11-07 09:48:47.927288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.523 [2024-11-07 09:48:47.951573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.523 [2024-11-07 09:48:47.951613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:20.523 [2024-11-07 09:48:47.951624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.247 ms 00:20:20.523 [2024-11-07 09:48:47.951640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.523 [2024-11-07 09:48:47.963269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.523 [2024-11-07 09:48:47.963301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:20.523 [2024-11-07 09:48:47.963311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.584 ms 00:20:20.523 [2024-11-07 09:48:47.963318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.523 [2024-11-07 09:48:47.974651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.523 [2024-11-07 09:48:47.974691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:20.523 [2024-11-07 09:48:47.974701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.302 ms 00:20:20.523 [2024-11-07 09:48:47.974708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.523 [2024-11-07 09:48:47.975308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.523 [2024-11-07 09:48:47.975337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:20.523 [2024-11-07 09:48:47.975349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.522 ms 00:20:20.523 [2024-11-07 09:48:47.975356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.523 [2024-11-07 09:48:48.029098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.523 [2024-11-07 09:48:48.029147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:20.523 [2024-11-07 09:48:48.029163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 53.725 ms 00:20:20.523 [2024-11-07 09:48:48.029171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.523 [2024-11-07 09:48:48.039482] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:20.523 [2024-11-07 09:48:48.041782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.523 [2024-11-07 09:48:48.041813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:20.523 [2024-11-07 09:48:48.041825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.567 ms 00:20:20.523 [2024-11-07 09:48:48.041834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.523 [2024-11-07 09:48:48.041923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.523 [2024-11-07 09:48:48.041933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:20.523 [2024-11-07 09:48:48.041942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:20.523 [2024-11-07 09:48:48.041951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.523 [2024-11-07 09:48:48.042014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.523 [2024-11-07 09:48:48.042023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:20.523 [2024-11-07 09:48:48.042032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:20:20.523 [2024-11-07 09:48:48.042039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.523 [2024-11-07 09:48:48.042058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.523 [2024-11-07 09:48:48.042065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:20.523 [2024-11-07 09:48:48.042073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:20.523 [2024-11-07 09:48:48.042081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.524 [2024-11-07 09:48:48.042112] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:20.524 [2024-11-07 09:48:48.042122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.524 [2024-11-07 09:48:48.042130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:20.524 [2024-11-07 09:48:48.042137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:20.524 [2024-11-07 09:48:48.042145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.524 [2024-11-07 09:48:48.065440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.524 [2024-11-07 09:48:48.065471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:20.524 [2024-11-07 09:48:48.065485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.278 ms 00:20:20.524 [2024-11-07 09:48:48.065493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.524 [2024-11-07 09:48:48.065562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.524 [2024-11-07 09:48:48.065571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:20.524 [2024-11-07 09:48:48.065580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:20:20.524 [2024-11-07 09:48:48.065588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
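Each management step above is traced with a name, a duration, and a status; the per-step durations sum to roughly the overall 'FTL startup' duration that finish_msg reports next (258.089 ms here; steps overlap with asynchronous work, so the sum is only approximate). A minimal cross-check from a saved copy of this output, assuming a hypothetical file name build.log:

# Sum every traced step duration and compare against the finish_msg total
grep -o 'duration: [0-9.]* ms' build.log | awk '{ sum += $2 } END { printf "steps total: %.3f ms\n", sum }'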
00:20:20.524 [2024-11-07 09:48:48.066784] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 258.089 ms, result 0
00:20:21.457  [2024-11-07T09:48:50.104Z] Copying: 44/1024 [MB] (44 MBps)
[... 29 intermediate progress samples elided; per-interval rate varied between 10 and 53 MBps ...]
[2024-11-07T09:49:19.223Z] Copying: 1024/1024 [MB] (average 33 MBps)
[2024-11-07 09:49:19.007734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.552 [2024-11-07 09:49:19.007791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:51.552 [2024-11-07 09:49:19.007812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:51.552 [2024-11-07 09:49:19.007821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.552 [2024-11-07 09:49:19.009577] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:51.552 [2024-11-07 09:49:19.014123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.552 [2024-11-07 09:49:19.014158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:51.552 [2024-11-07 09:49:19.014168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.508 ms 00:20:51.552 [2024-11-07 09:49:19.014176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.552 [2024-11-07 09:49:19.026177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.552 [2024-11-07 09:49:19.026211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:51.552 [2024-11-07 09:49:19.026221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.891 ms 00:20:51.552 [2024-11-07 09:49:19.026236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
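The reported average in the copy progress above is consistent with total size over wall time: the copy ran from roughly 09:48:48 (when FTL startup finished) to 09:49:19, about 31 seconds, and 1024 MB over 31 s is about 33 MBps, matching the 'average 33 MBps' figure. A quick check with the values copied from the log:

# 1024 MB copied in ~31 s of wall time (timestamps taken from the log above)
awk 'BEGIN { printf "%.1f MBps\n", 1024 / 31 }'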
00:20:51.552 [2024-11-07 09:49:19.044601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.552 [2024-11-07 09:49:19.044642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:51.552 [2024-11-07 09:49:19.044652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.350 ms 00:20:51.552 [2024-11-07 09:49:19.044660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.552 [2024-11-07 09:49:19.050742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.552 [2024-11-07 09:49:19.050770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:51.552 [2024-11-07 09:49:19.050780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.056 ms 00:20:51.552 [2024-11-07 09:49:19.050788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.552 [2024-11-07 09:49:19.074397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.552 [2024-11-07 09:49:19.074435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:51.552 [2024-11-07 09:49:19.074445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.564 ms 00:20:51.552 [2024-11-07 09:49:19.074452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.552 [2024-11-07 09:49:19.088598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.552 [2024-11-07 09:49:19.088642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:51.552 [2024-11-07 09:49:19.088653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.115 ms 00:20:51.552 [2024-11-07 09:49:19.088661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.552 [2024-11-07 09:49:19.141065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.552 [2024-11-07 09:49:19.141101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:51.552 [2024-11-07 09:49:19.141111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.370 ms 00:20:51.552 [2024-11-07 09:49:19.141120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.552 [2024-11-07 09:49:19.164177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.552 [2024-11-07 09:49:19.164207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:51.552 [2024-11-07 09:49:19.164217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.043 ms 00:20:51.552 [2024-11-07 09:49:19.164225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.552 [2024-11-07 09:49:19.187096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.552 [2024-11-07 09:49:19.187132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:51.552 [2024-11-07 09:49:19.187142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.842 ms 00:20:51.552 [2024-11-07 09:49:19.187149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.552 [2024-11-07 09:49:19.209118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.552 [2024-11-07 09:49:19.209147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:51.552 [2024-11-07 09:49:19.209157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.937 ms 00:20:51.552 [2024-11-07 09:49:19.209164] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.812 [2024-11-07 09:49:19.231591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.812 [2024-11-07 09:49:19.231624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:51.812 [2024-11-07 09:49:19.231648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.375 ms 00:20:51.812 [2024-11-07 09:49:19.231656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.812 [2024-11-07 09:49:19.231686] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:51.812 [2024-11-07 09:49:19.231699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 121856 / 261120 wr_cnt: 1 state: open 00:20:51.812 [2024-11-07 09:49:19.231709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:51.812 [2024-11-07 09:49:19.231717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:51.812 [2024-11-07 09:49:19.231725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:51.812 [2024-11-07 09:49:19.231732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:51.812 [2024-11-07 09:49:19.231739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:51.812 [2024-11-07 09:49:19.231746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:51.812 [2024-11-07 09:49:19.231754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:51.812 [2024-11-07 09:49:19.231762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:51.812 [2024-11-07 09:49:19.231769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:51.812 [2024-11-07 09:49:19.231776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:51.812 [2024-11-07 09:49:19.231783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:51.812 [2024-11-07 09:49:19.231791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:51.812 [2024-11-07 09:49:19.231798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:51.812 [2024-11-07 09:49:19.231805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:51.812 [2024-11-07 09:49:19.231812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:51.812 [2024-11-07 09:49:19.231819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:51.812 [2024-11-07 09:49:19.231826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:51.812 [2024-11-07 09:49:19.231833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:51.812 [2024-11-07 09:49:19.231841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:51.812 [2024-11-07 09:49:19.231848] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:51.812 [2024-11-07 09:49:19.231855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:51.812 [2024-11-07 09:49:19.231863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:51.812 [2024-11-07 09:49:19.231870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:51.812 [2024-11-07 09:49:19.231879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:51.812 [2024-11-07 09:49:19.231886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:51.812 [2024-11-07 09:49:19.231894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:51.812 [2024-11-07 09:49:19.231901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:51.812 [2024-11-07 09:49:19.231908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:51.812 [2024-11-07 09:49:19.231915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:51.812 [2024-11-07 09:49:19.231922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:51.812 [2024-11-07 09:49:19.231930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:51.812 [2024-11-07 09:49:19.231937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:51.812 [2024-11-07 09:49:19.231944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:51.812 [2024-11-07 09:49:19.231951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:51.812 [2024-11-07 09:49:19.231958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:51.812 [2024-11-07 09:49:19.231965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:51.812 [2024-11-07 09:49:19.231972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:51.812 [2024-11-07 09:49:19.231979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:51.812 [2024-11-07 09:49:19.231987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:51.812 [2024-11-07 09:49:19.231994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:51.812 [2024-11-07 09:49:19.232001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:51.812 [2024-11-07 09:49:19.232008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:51.812 [2024-11-07 09:49:19.232016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:51.812 [2024-11-07 09:49:19.232023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:51.812 [2024-11-07 
09:49:19.232031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:51.812 [2024-11-07 09:49:19.232038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:51.813 [2024-11-07 09:49:19.232045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:51.813 [2024-11-07 09:49:19.232052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:51.813 [2024-11-07 09:49:19.232059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:51.813 [2024-11-07 09:49:19.232067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:51.813 [2024-11-07 09:49:19.232074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:51.813 [2024-11-07 09:49:19.232081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:51.813 [2024-11-07 09:49:19.232088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:51.813 [2024-11-07 09:49:19.232095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:51.813 [2024-11-07 09:49:19.232103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:51.813 [2024-11-07 09:49:19.232110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:51.813 [2024-11-07 09:49:19.232117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:51.813 [2024-11-07 09:49:19.232125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:51.813 [2024-11-07 09:49:19.232132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:51.813 [2024-11-07 09:49:19.232139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:51.813 [2024-11-07 09:49:19.232147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:51.813 [2024-11-07 09:49:19.232154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:51.813 [2024-11-07 09:49:19.232161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:51.813 [2024-11-07 09:49:19.232168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:51.813 [2024-11-07 09:49:19.232175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:51.813 [2024-11-07 09:49:19.232183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:51.813 [2024-11-07 09:49:19.232190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:51.813 [2024-11-07 09:49:19.232197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:51.813 [2024-11-07 09:49:19.232204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 
00:20:51.813 [2024-11-07 09:49:19.232211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:51.813 [2024-11-07 09:49:19.232219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:51.813 [2024-11-07 09:49:19.232226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:51.813 [2024-11-07 09:49:19.232233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:51.813 [2024-11-07 09:49:19.232241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:51.813 [2024-11-07 09:49:19.232248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:51.813 [2024-11-07 09:49:19.232255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:51.813 [2024-11-07 09:49:19.232262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:51.813 [2024-11-07 09:49:19.232270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:51.813 [2024-11-07 09:49:19.232277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:51.813 [2024-11-07 09:49:19.232284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:51.813 [2024-11-07 09:49:19.232292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:51.813 [2024-11-07 09:49:19.232299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:51.813 [2024-11-07 09:49:19.232306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:51.813 [2024-11-07 09:49:19.232313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:51.813 [2024-11-07 09:49:19.232320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:51.813 [2024-11-07 09:49:19.232327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:51.813 [2024-11-07 09:49:19.232334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:51.813 [2024-11-07 09:49:19.232342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:51.813 [2024-11-07 09:49:19.232349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:51.813 [2024-11-07 09:49:19.232356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:51.813 [2024-11-07 09:49:19.232363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:51.813 [2024-11-07 09:49:19.232370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:51.813 [2024-11-07 09:49:19.232379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:51.813 [2024-11-07 09:49:19.232386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 
wr_cnt: 0 state: free 00:20:51.813 [2024-11-07 09:49:19.232394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:51.813 [2024-11-07 09:49:19.232401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:51.813 [2024-11-07 09:49:19.232408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:51.813 [2024-11-07 09:49:19.232415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:51.813 [2024-11-07 09:49:19.232423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:51.813 [2024-11-07 09:49:19.232438] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:51.813 [2024-11-07 09:49:19.232445] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 60039578-222b-4b90-a79a-9095c30dd114 00:20:51.813 [2024-11-07 09:49:19.232453] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 121856 00:20:51.813 [2024-11-07 09:49:19.232460] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 122816 00:20:51.813 [2024-11-07 09:49:19.232467] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 121856 00:20:51.813 [2024-11-07 09:49:19.232475] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0079 00:20:51.813 [2024-11-07 09:49:19.232486] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:51.813 [2024-11-07 09:49:19.232494] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:51.813 [2024-11-07 09:49:19.232506] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:51.813 [2024-11-07 09:49:19.232513] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:51.813 [2024-11-07 09:49:19.232519] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:51.813 [2024-11-07 09:49:19.232526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.813 [2024-11-07 09:49:19.232533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:51.813 [2024-11-07 09:49:19.232542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.841 ms 00:20:51.813 [2024-11-07 09:49:19.232549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.813 [2024-11-07 09:49:19.244666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.813 [2024-11-07 09:49:19.244697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:51.813 [2024-11-07 09:49:19.244711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.102 ms 00:20:51.813 [2024-11-07 09:49:19.244719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.813 [2024-11-07 09:49:19.245051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.813 [2024-11-07 09:49:19.245067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:51.813 [2024-11-07 09:49:19.245075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:20:51.813 [2024-11-07 09:49:19.245082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.813 [2024-11-07 09:49:19.277488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:51.813 [2024-11-07 09:49:19.277526] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:51.813 [2024-11-07 09:49:19.277535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:51.813 [2024-11-07 09:49:19.277543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.813 [2024-11-07 09:49:19.277600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:51.813 [2024-11-07 09:49:19.277608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:51.813 [2024-11-07 09:49:19.277616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:51.813 [2024-11-07 09:49:19.277623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.813 [2024-11-07 09:49:19.277686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:51.813 [2024-11-07 09:49:19.277696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:51.813 [2024-11-07 09:49:19.277706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:51.813 [2024-11-07 09:49:19.277713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.813 [2024-11-07 09:49:19.277727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:51.813 [2024-11-07 09:49:19.277735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:51.813 [2024-11-07 09:49:19.277742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:51.813 [2024-11-07 09:49:19.277749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.813 [2024-11-07 09:49:19.353619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:51.813 [2024-11-07 09:49:19.353678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:51.813 [2024-11-07 09:49:19.353690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:51.813 [2024-11-07 09:49:19.353697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.813 [2024-11-07 09:49:19.415652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:51.814 [2024-11-07 09:49:19.415696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:51.814 [2024-11-07 09:49:19.415707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:51.814 [2024-11-07 09:49:19.415716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.814 [2024-11-07 09:49:19.415787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:51.814 [2024-11-07 09:49:19.415797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:51.814 [2024-11-07 09:49:19.415805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:51.814 [2024-11-07 09:49:19.415814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.814 [2024-11-07 09:49:19.415847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:51.814 [2024-11-07 09:49:19.415856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:51.814 [2024-11-07 09:49:19.415863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:51.814 [2024-11-07 09:49:19.415870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.814 [2024-11-07 09:49:19.415954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:20:51.814 [2024-11-07 09:49:19.415965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:51.814 [2024-11-07 09:49:19.415973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:51.814 [2024-11-07 09:49:19.415980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.814 [2024-11-07 09:49:19.416012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:51.814 [2024-11-07 09:49:19.416027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:51.814 [2024-11-07 09:49:19.416034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:51.814 [2024-11-07 09:49:19.416042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.814 [2024-11-07 09:49:19.416074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:51.814 [2024-11-07 09:49:19.416083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:51.814 [2024-11-07 09:49:19.416090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:51.814 [2024-11-07 09:49:19.416097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.814 [2024-11-07 09:49:19.416138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:51.814 [2024-11-07 09:49:19.416148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:51.814 [2024-11-07 09:49:19.416156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:51.814 [2024-11-07 09:49:19.416163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.814 [2024-11-07 09:49:19.416270] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 410.654 ms, result 0 00:20:53.716 00:20:53.716 00:20:53.716 09:49:21 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:20:53.716 [2024-11-07 09:49:21.260625] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
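This spdk_dd invocation reads data back out of ftl0 into testfile, presumably for comparison against the data written before the shutdown above. With --skip and --count given in input-bdev blocks, and assuming the FTL bdev's usual 4 KiB block size (an assumption, not stated in the log), the parameters select a 1024 MiB span starting 512 MiB into the device:

# Decode the dd parameters; bs = 4096 is the assumed FTL block size
awk 'BEGIN { bs = 4096
             printf "skip:  %d blocks = %d MiB\n", 131072, 131072 * bs / 2^20
             printf "count: %d blocks = %d MiB\n", 262144, 262144 * bs / 2^20 }'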
00:20:53.717 [2024-11-07 09:49:21.260764] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76243 ] 00:20:53.975 [2024-11-07 09:49:21.418886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.975 [2024-11-07 09:49:21.514656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:54.232 [2024-11-07 09:49:21.765427] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:54.232 [2024-11-07 09:49:21.765483] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:54.492 [2024-11-07 09:49:21.918595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.492 [2024-11-07 09:49:21.918651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:54.492 [2024-11-07 09:49:21.918669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:54.492 [2024-11-07 09:49:21.918677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.492 [2024-11-07 09:49:21.918719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.492 [2024-11-07 09:49:21.918728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:54.492 [2024-11-07 09:49:21.918738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:20:54.492 [2024-11-07 09:49:21.918745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.492 [2024-11-07 09:49:21.918764] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:54.492 [2024-11-07 09:49:21.919408] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:54.492 [2024-11-07 09:49:21.919424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.492 [2024-11-07 09:49:21.919432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:54.492 [2024-11-07 09:49:21.919440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.664 ms 00:20:54.492 [2024-11-07 09:49:21.919447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.492 [2024-11-07 09:49:21.920445] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:54.492 [2024-11-07 09:49:21.932613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.492 [2024-11-07 09:49:21.932648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:54.492 [2024-11-07 09:49:21.932659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.169 ms 00:20:54.492 [2024-11-07 09:49:21.932667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.492 [2024-11-07 09:49:21.932718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.492 [2024-11-07 09:49:21.932726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:54.492 [2024-11-07 09:49:21.932734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:54.492 [2024-11-07 09:49:21.932741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.492 [2024-11-07 09:49:21.937242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:54.492 [2024-11-07 09:49:21.937267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:54.492 [2024-11-07 09:49:21.937276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.443 ms 00:20:54.492 [2024-11-07 09:49:21.937283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.492 [2024-11-07 09:49:21.937349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.492 [2024-11-07 09:49:21.937358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:54.492 [2024-11-07 09:49:21.937366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:20:54.492 [2024-11-07 09:49:21.937373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.492 [2024-11-07 09:49:21.937413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.492 [2024-11-07 09:49:21.937422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:54.492 [2024-11-07 09:49:21.937429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:54.492 [2024-11-07 09:49:21.937436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.492 [2024-11-07 09:49:21.937456] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:54.492 [2024-11-07 09:49:21.940764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.492 [2024-11-07 09:49:21.940787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:54.492 [2024-11-07 09:49:21.940795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.313 ms 00:20:54.492 [2024-11-07 09:49:21.940805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.492 [2024-11-07 09:49:21.940831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.492 [2024-11-07 09:49:21.940839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:54.492 [2024-11-07 09:49:21.940847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:54.492 [2024-11-07 09:49:21.940854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.492 [2024-11-07 09:49:21.940873] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:54.492 [2024-11-07 09:49:21.940890] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:54.492 [2024-11-07 09:49:21.940923] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:54.492 [2024-11-07 09:49:21.940940] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:54.492 [2024-11-07 09:49:21.941040] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:54.492 [2024-11-07 09:49:21.941049] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:54.492 [2024-11-07 09:49:21.941059] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:54.493 [2024-11-07 09:49:21.941069] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:54.493 [2024-11-07 09:49:21.941077] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:54.493 [2024-11-07 09:49:21.941085] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:54.493 [2024-11-07 09:49:21.941092] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:54.493 [2024-11-07 09:49:21.941099] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:54.493 [2024-11-07 09:49:21.941106] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:54.493 [2024-11-07 09:49:21.941115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.493 [2024-11-07 09:49:21.941122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:54.493 [2024-11-07 09:49:21.941130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.244 ms 00:20:54.493 [2024-11-07 09:49:21.941137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.493 [2024-11-07 09:49:21.941218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.493 [2024-11-07 09:49:21.941226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:54.493 [2024-11-07 09:49:21.941233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:20:54.493 [2024-11-07 09:49:21.941240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.493 [2024-11-07 09:49:21.941339] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:54.493 [2024-11-07 09:49:21.941350] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:54.493 [2024-11-07 09:49:21.941358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:54.493 [2024-11-07 09:49:21.941365] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:54.493 [2024-11-07 09:49:21.941372] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:54.493 [2024-11-07 09:49:21.941379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:54.493 [2024-11-07 09:49:21.941386] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:54.493 [2024-11-07 09:49:21.941393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:54.493 [2024-11-07 09:49:21.941400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:54.493 [2024-11-07 09:49:21.941407] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:54.493 [2024-11-07 09:49:21.941413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:54.493 [2024-11-07 09:49:21.941420] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:54.493 [2024-11-07 09:49:21.941427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:54.493 [2024-11-07 09:49:21.941433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:54.493 [2024-11-07 09:49:21.941440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:54.493 [2024-11-07 09:49:21.941451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:54.493 [2024-11-07 09:49:21.941458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:54.493 [2024-11-07 09:49:21.941464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:54.493 [2024-11-07 09:49:21.941470] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:54.493 [2024-11-07 09:49:21.941477] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:54.493 [2024-11-07 09:49:21.941485] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:54.493 [2024-11-07 09:49:21.941492] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:54.493 [2024-11-07 09:49:21.941499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:54.493 [2024-11-07 09:49:21.941505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:54.493 [2024-11-07 09:49:21.941512] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:54.493 [2024-11-07 09:49:21.941519] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:54.493 [2024-11-07 09:49:21.941525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:54.493 [2024-11-07 09:49:21.941531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:54.493 [2024-11-07 09:49:21.941537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:54.493 [2024-11-07 09:49:21.941544] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:54.493 [2024-11-07 09:49:21.941550] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:54.493 [2024-11-07 09:49:21.941556] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:54.493 [2024-11-07 09:49:21.941563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:54.493 [2024-11-07 09:49:21.941570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:54.493 [2024-11-07 09:49:21.941576] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:54.493 [2024-11-07 09:49:21.941582] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:54.493 [2024-11-07 09:49:21.941588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:54.493 [2024-11-07 09:49:21.941595] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:54.493 [2024-11-07 09:49:21.941601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:54.493 [2024-11-07 09:49:21.941608] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:54.493 [2024-11-07 09:49:21.941614] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:54.493 [2024-11-07 09:49:21.941621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:54.493 [2024-11-07 09:49:21.941640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:54.493 [2024-11-07 09:49:21.941647] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:54.493 [2024-11-07 09:49:21.941654] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:54.493 [2024-11-07 09:49:21.941662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:54.493 [2024-11-07 09:49:21.941668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:54.493 [2024-11-07 09:49:21.941676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:54.493 [2024-11-07 09:49:21.941683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:54.493 [2024-11-07 09:49:21.941690] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:54.493 
[2024-11-07 09:49:21.941696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:54.493 [2024-11-07 09:49:21.941702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:54.493 [2024-11-07 09:49:21.941710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:54.493 [2024-11-07 09:49:21.941718] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:54.493 [2024-11-07 09:49:21.941727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:54.493 [2024-11-07 09:49:21.941735] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:54.493 [2024-11-07 09:49:21.941742] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:54.493 [2024-11-07 09:49:21.941750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:54.493 [2024-11-07 09:49:21.941757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:54.493 [2024-11-07 09:49:21.941764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:54.493 [2024-11-07 09:49:21.941771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:54.493 [2024-11-07 09:49:21.941778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:54.493 [2024-11-07 09:49:21.941785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:54.493 [2024-11-07 09:49:21.941792] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:54.493 [2024-11-07 09:49:21.941799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:54.493 [2024-11-07 09:49:21.941806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:54.493 [2024-11-07 09:49:21.941813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:54.493 [2024-11-07 09:49:21.941820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:54.493 [2024-11-07 09:49:21.941827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:54.493 [2024-11-07 09:49:21.941834] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:54.493 [2024-11-07 09:49:21.941844] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:54.493 [2024-11-07 09:49:21.941852] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:54.493 [2024-11-07 09:49:21.941860] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:54.493 [2024-11-07 09:49:21.941867] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:54.493 [2024-11-07 09:49:21.941874] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:54.493 [2024-11-07 09:49:21.941881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.493 [2024-11-07 09:49:21.941888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:54.493 [2024-11-07 09:49:21.941896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.610 ms 00:20:54.493 [2024-11-07 09:49:21.941903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.493 [2024-11-07 09:49:21.967119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.494 [2024-11-07 09:49:21.967147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:54.494 [2024-11-07 09:49:21.967157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.164 ms 00:20:54.494 [2024-11-07 09:49:21.967165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.494 [2024-11-07 09:49:21.967247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.494 [2024-11-07 09:49:21.967255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:54.494 [2024-11-07 09:49:21.967263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:20:54.494 [2024-11-07 09:49:21.967282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.494 [2024-11-07 09:49:22.009503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.494 [2024-11-07 09:49:22.009538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:54.494 [2024-11-07 09:49:22.009549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.164 ms 00:20:54.494 [2024-11-07 09:49:22.009557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.494 [2024-11-07 09:49:22.009595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.494 [2024-11-07 09:49:22.009604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:54.494 [2024-11-07 09:49:22.009612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:54.494 [2024-11-07 09:49:22.009622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.494 [2024-11-07 09:49:22.009965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.494 [2024-11-07 09:49:22.009988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:54.494 [2024-11-07 09:49:22.009997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:20:54.494 [2024-11-07 09:49:22.010004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.494 [2024-11-07 09:49:22.010122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.494 [2024-11-07 09:49:22.010131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:54.494 [2024-11-07 09:49:22.010138] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:20:54.494 [2024-11-07 09:49:22.010146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.494 [2024-11-07 09:49:22.023150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.494 [2024-11-07 09:49:22.023178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:54.494 [2024-11-07 09:49:22.023188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.981 ms 00:20:54.494 [2024-11-07 09:49:22.023199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.494 [2024-11-07 09:49:22.035541] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:20:54.494 [2024-11-07 09:49:22.035572] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:54.494 [2024-11-07 09:49:22.035582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.494 [2024-11-07 09:49:22.035590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:54.494 [2024-11-07 09:49:22.035598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.265 ms 00:20:54.494 [2024-11-07 09:49:22.035605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.494 [2024-11-07 09:49:22.059653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.494 [2024-11-07 09:49:22.059687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:54.494 [2024-11-07 09:49:22.059697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.995 ms 00:20:54.494 [2024-11-07 09:49:22.059704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.494 [2024-11-07 09:49:22.071086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.494 [2024-11-07 09:49:22.071118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:54.494 [2024-11-07 09:49:22.071128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.348 ms 00:20:54.494 [2024-11-07 09:49:22.071135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.494 [2024-11-07 09:49:22.082058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.494 [2024-11-07 09:49:22.082093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:54.494 [2024-11-07 09:49:22.082103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.893 ms 00:20:54.494 [2024-11-07 09:49:22.082111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.494 [2024-11-07 09:49:22.082728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.494 [2024-11-07 09:49:22.082747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:54.494 [2024-11-07 09:49:22.082755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.539 ms 00:20:54.494 [2024-11-07 09:49:22.082766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.494 [2024-11-07 09:49:22.137143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.494 [2024-11-07 09:49:22.137185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:54.494 [2024-11-07 09:49:22.137201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 54.360 ms 00:20:54.494 [2024-11-07 09:49:22.137210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.494 [2024-11-07 09:49:22.147357] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:54.494 [2024-11-07 09:49:22.149536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.494 [2024-11-07 09:49:22.149562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:54.494 [2024-11-07 09:49:22.149573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.284 ms 00:20:54.494 [2024-11-07 09:49:22.149581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.494 [2024-11-07 09:49:22.149675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.494 [2024-11-07 09:49:22.149687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:54.494 [2024-11-07 09:49:22.149697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:54.494 [2024-11-07 09:49:22.149708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.494 [2024-11-07 09:49:22.151043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.494 [2024-11-07 09:49:22.151072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:54.494 [2024-11-07 09:49:22.151081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.296 ms 00:20:54.494 [2024-11-07 09:49:22.151089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.494 [2024-11-07 09:49:22.151110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.494 [2024-11-07 09:49:22.151118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:54.494 [2024-11-07 09:49:22.151127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:54.494 [2024-11-07 09:49:22.151134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.494 [2024-11-07 09:49:22.151166] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:54.494 [2024-11-07 09:49:22.151178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.494 [2024-11-07 09:49:22.151185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:54.494 [2024-11-07 09:49:22.151193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:54.494 [2024-11-07 09:49:22.151200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.753 [2024-11-07 09:49:22.173902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.753 [2024-11-07 09:49:22.173930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:54.753 [2024-11-07 09:49:22.173940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.686 ms 00:20:54.753 [2024-11-07 09:49:22.173951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.753 [2024-11-07 09:49:22.174017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.753 [2024-11-07 09:49:22.174026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:54.753 [2024-11-07 09:49:22.174034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:20:54.753 [2024-11-07 09:49:22.174042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
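The trace above walks the FTL 'startup' management pipeline step by step: the superblock/metadata layout dump and layout upgrade, metadata and NV-cache initialization, restore of the valid-map, band, trim and P2L state, L2P initialization and restore, starting the core poller, and marking the device dirty until a clean shutdown lands; trace_step logs a duration for each step. As a rough, illustrative sketch (bdev names are placeholders, not from this run), an FTL instance such as ftl0 is created over a base bdev plus an NV-cache bdev through SPDK RPC, which is what drives this pipeline:

    # Illustrative sketch only -- placeholder bdev names, not from this log.
    # Creating an FTL bdev kicks off the 'FTL startup' pipeline traced above.
    scripts/rpc.py bdev_ftl_create -b ftl0 -d base_bdev -c cache_bdev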
00:20:54.753 [2024-11-07 09:49:22.174897] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 255.887 ms, result 0
00:20:56.126 [2024-11-07T09:49:24.363Z] Copying: 44/1024 [MB] (44 MBps)
[2024-11-07T09:49:25.755Z] Copying: 93/1024 [MB] (48 MBps)
[2024-11-07T09:49:26.688Z] Copying: 143/1024 [MB] (49 MBps)
[2024-11-07T09:49:27.623Z] Copying: 192/1024 [MB] (48 MBps)
[2024-11-07T09:49:28.558Z] Copying: 241/1024 [MB] (49 MBps)
[2024-11-07T09:49:29.502Z] Copying: 289/1024 [MB] (47 MBps)
[2024-11-07T09:49:30.445Z] Copying: 310/1024 [MB] (20 MBps)
[2024-11-07T09:49:31.388Z] Copying: 332/1024 [MB] (22 MBps)
[2024-11-07T09:49:32.766Z] Copying: 359/1024 [MB] (27 MBps)
[2024-11-07T09:49:33.707Z] Copying: 396/1024 [MB] (36 MBps)
[2024-11-07T09:49:34.649Z] Copying: 422/1024 [MB] (25 MBps)
[2024-11-07T09:49:35.591Z] Copying: 442/1024 [MB] (20 MBps)
[2024-11-07T09:49:36.533Z] Copying: 461/1024 [MB] (19 MBps)
[2024-11-07T09:49:37.517Z] Copying: 480/1024 [MB] (18 MBps)
[2024-11-07T09:49:38.472Z] Copying: 500/1024 [MB] (20 MBps)
[2024-11-07T09:49:39.416Z] Copying: 520/1024 [MB] (19 MBps)
[2024-11-07T09:49:40.799Z] Copying: 544/1024 [MB] (23 MBps)
[2024-11-07T09:49:41.386Z] Copying: 569/1024 [MB] (25 MBps)
[2024-11-07T09:49:42.363Z] Copying: 592/1024 [MB] (22 MBps)
[2024-11-07T09:49:43.748Z] Copying: 612/1024 [MB] (20 MBps)
[2024-11-07T09:49:44.692Z] Copying: 633/1024 [MB] (20 MBps)
[2024-11-07T09:49:45.636Z] Copying: 645/1024 [MB] (11 MBps)
[2024-11-07T09:49:46.601Z] Copying: 656/1024 [MB] (11 MBps)
[2024-11-07T09:49:47.541Z] Copying: 667/1024 [MB] (11 MBps)
[2024-11-07T09:49:48.484Z] Copying: 679/1024 [MB] (11 MBps)
[2024-11-07T09:49:49.428Z] Copying: 691/1024 [MB] (12 MBps)
[2024-11-07T09:49:50.371Z] Copying: 703/1024 [MB] (11 MBps)
[2024-11-07T09:49:51.759Z] Copying: 715/1024 [MB] (12 MBps)
[2024-11-07T09:49:52.702Z] Copying: 726/1024 [MB] (11 MBps)
[2024-11-07T09:49:53.653Z] Copying: 739/1024 [MB] (12 MBps)
[2024-11-07T09:49:54.614Z] Copying: 750/1024 [MB] (11 MBps)
[2024-11-07T09:49:55.559Z] Copying: 762/1024 [MB] (11 MBps)
[2024-11-07T09:49:56.503Z] Copying: 773/1024 [MB] (11 MBps)
[2024-11-07T09:49:57.444Z] Copying: 785/1024 [MB] (11 MBps)
[2024-11-07T09:49:58.399Z] Copying: 796/1024 [MB] (11 MBps)
[2024-11-07T09:49:59.377Z] Copying: 808/1024 [MB] (11 MBps)
[2024-11-07T09:50:00.764Z] Copying: 820/1024 [MB] (11 MBps)
[2024-11-07T09:50:01.705Z] Copying: 834/1024 [MB] (14 MBps)
[2024-11-07T09:50:02.646Z] Copying: 846/1024 [MB] (11 MBps)
[2024-11-07T09:50:03.591Z] Copying: 858/1024 [MB] (12 MBps)
[2024-11-07T09:50:04.536Z] Copying: 870/1024 [MB] (12 MBps)
[2024-11-07T09:50:05.486Z] Copying: 881/1024 [MB] (11 MBps)
[2024-11-07T09:50:06.431Z] Copying: 893/1024 [MB] (11 MBps)
[2024-11-07T09:50:07.374Z] Copying: 904/1024 [MB] (11 MBps)
[2024-11-07T09:50:08.759Z] Copying: 916/1024 [MB] (11 MBps)
[2024-11-07T09:50:09.701Z] Copying: 927/1024 [MB] (11 MBps)
[2024-11-07T09:50:10.645Z] Copying: 938/1024 [MB] (11 MBps)
[2024-11-07T09:50:11.583Z] Copying: 951/1024 [MB] (13 MBps)
[2024-11-07T09:50:12.525Z] Copying: 962/1024 [MB] (11 MBps)
[2024-11-07T09:50:13.466Z] Copying: 973/1024 [MB] (10 MBps)
[2024-11-07T09:50:14.409Z] Copying: 984/1024 [MB] (11 MBps)
[2024-11-07T09:50:15.385Z] Copying: 995/1024 [MB] (10 MBps)
[2024-11-07T09:50:16.765Z] Copying: 1006/1024 [MB] (11 MBps)
[2024-11-07T09:50:17.025Z] Copying: 1018/1024 [MB] (11 MBps)
[2024-11-07T09:50:17.025Z] Copying: 1024/1024 [MB] (average 18 MBps)
[2024-11-07 09:50:16.989217] mngt/ftl_mngt.c:
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.354 [2024-11-07 09:50:16.989285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:49.354 [2024-11-07 09:50:16.989302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:49.354 [2024-11-07 09:50:16.989316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.354 [2024-11-07 09:50:16.989351] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:49.354 [2024-11-07 09:50:16.992490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.354 [2024-11-07 09:50:16.992516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:49.354 [2024-11-07 09:50:16.992528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.120 ms 00:21:49.354 [2024-11-07 09:50:16.992537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.354 [2024-11-07 09:50:16.992768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.354 [2024-11-07 09:50:16.992778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:49.354 [2024-11-07 09:50:16.992787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:21:49.354 [2024-11-07 09:50:16.992794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.354 [2024-11-07 09:50:16.997226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.354 [2024-11-07 09:50:16.997266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:49.354 [2024-11-07 09:50:16.997277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.415 ms 00:21:49.354 [2024-11-07 09:50:16.997285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.354 [2024-11-07 09:50:17.004474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.354 [2024-11-07 09:50:17.004505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:49.354 [2024-11-07 09:50:17.004516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.322 ms 00:21:49.354 [2024-11-07 09:50:17.004525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.614 [2024-11-07 09:50:17.029361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.614 [2024-11-07 09:50:17.029393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:49.614 [2024-11-07 09:50:17.029405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.764 ms 00:21:49.614 [2024-11-07 09:50:17.029414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.614 [2024-11-07 09:50:17.043467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.614 [2024-11-07 09:50:17.043500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:49.614 [2024-11-07 09:50:17.043512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.017 ms 00:21:49.614 [2024-11-07 09:50:17.043521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.875 [2024-11-07 09:50:17.297853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.875 [2024-11-07 09:50:17.297929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:49.875 [2024-11-07 09:50:17.297943] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 254.294 ms 00:21:49.875 [2024-11-07 09:50:17.297952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.875 [2024-11-07 09:50:17.322599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.875 [2024-11-07 09:50:17.322656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:49.875 [2024-11-07 09:50:17.322669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.631 ms 00:21:49.876 [2024-11-07 09:50:17.322676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.876 [2024-11-07 09:50:17.345889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.876 [2024-11-07 09:50:17.345930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:49.876 [2024-11-07 09:50:17.345951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.178 ms 00:21:49.876 [2024-11-07 09:50:17.345958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.876 [2024-11-07 09:50:17.368375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.876 [2024-11-07 09:50:17.368406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:49.876 [2024-11-07 09:50:17.368418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.383 ms 00:21:49.876 [2024-11-07 09:50:17.368425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.876 [2024-11-07 09:50:17.390857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.876 [2024-11-07 09:50:17.390887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:49.876 [2024-11-07 09:50:17.390898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.379 ms 00:21:49.876 [2024-11-07 09:50:17.390905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.876 [2024-11-07 09:50:17.390933] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:49.876 [2024-11-07 09:50:17.390948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:21:49.876 [2024-11-07 09:50:17.390958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:49.876 [2024-11-07 09:50:17.390966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:49.876 [2024-11-07 09:50:17.390973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:49.876 [2024-11-07 09:50:17.390981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:49.876 [2024-11-07 09:50:17.390988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:49.876 [2024-11-07 09:50:17.390997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:49.876 [2024-11-07 09:50:17.391004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:49.876 [2024-11-07 09:50:17.391012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:49.876 [2024-11-07 09:50:17.391020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:49.876 [2024-11-07 09:50:17.391027] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:49.876 [2024-11-07 09:50:17.391034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:49.876 [2024-11-07 09:50:17.391041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:49.876 [2024-11-07 09:50:17.391049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:49.876 [2024-11-07 09:50:17.391056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:49.876 [2024-11-07 09:50:17.391063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:49.876 [2024-11-07 09:50:17.391071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:49.876 [2024-11-07 09:50:17.391078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:49.876 [2024-11-07 09:50:17.391086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:49.876 [2024-11-07 09:50:17.391093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:49.876 [2024-11-07 09:50:17.391100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:49.876 [2024-11-07 09:50:17.391107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:49.876 [2024-11-07 09:50:17.391114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:49.876 [2024-11-07 09:50:17.391121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:49.876 [2024-11-07 09:50:17.391129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:49.876 [2024-11-07 09:50:17.391137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:49.876 [2024-11-07 09:50:17.391144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:49.876 [2024-11-07 09:50:17.391152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:49.876 [2024-11-07 09:50:17.391159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:49.876 [2024-11-07 09:50:17.391167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:49.876 [2024-11-07 09:50:17.391174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:49.876 [2024-11-07 09:50:17.391181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:49.876 [2024-11-07 09:50:17.391188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:49.876 [2024-11-07 09:50:17.391196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:49.876 [2024-11-07 09:50:17.391203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:49.876 
[2024-11-07 09:50:17.391210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:49.876 [2024-11-07 09:50:17.391217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:49.876 [2024-11-07 09:50:17.391225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:49.876 [2024-11-07 09:50:17.391232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:49.876 [2024-11-07 09:50:17.391239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:49.876 [2024-11-07 09:50:17.391246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:49.876 [2024-11-07 09:50:17.391253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:49.876 [2024-11-07 09:50:17.391260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:49.876 [2024-11-07 09:50:17.391275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:49.876 [2024-11-07 09:50:17.391282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:49.876 [2024-11-07 09:50:17.391289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:49.876 [2024-11-07 09:50:17.391297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:49.876 [2024-11-07 09:50:17.391304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:49.876 [2024-11-07 09:50:17.391311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:49.876 [2024-11-07 09:50:17.391318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:49.876 [2024-11-07 09:50:17.391326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:49.876 [2024-11-07 09:50:17.391333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:49.876 [2024-11-07 09:50:17.391341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:49.876 [2024-11-07 09:50:17.391348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:49.877 [2024-11-07 09:50:17.391356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:49.877 [2024-11-07 09:50:17.391363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:49.877 [2024-11-07 09:50:17.391370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:49.877 [2024-11-07 09:50:17.391379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:49.877 [2024-11-07 09:50:17.391386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:49.877 [2024-11-07 09:50:17.391393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:21:49.877 [2024-11-07 09:50:17.391401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:49.877 [2024-11-07 09:50:17.391414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:49.877 [2024-11-07 09:50:17.391421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:49.877 [2024-11-07 09:50:17.391429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:49.877 [2024-11-07 09:50:17.391439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:49.877 [2024-11-07 09:50:17.391446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:49.877 [2024-11-07 09:50:17.391454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:49.877 [2024-11-07 09:50:17.391461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:49.877 [2024-11-07 09:50:17.391468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:49.877 [2024-11-07 09:50:17.391476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:49.877 [2024-11-07 09:50:17.391485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:49.877 [2024-11-07 09:50:17.391493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:49.877 [2024-11-07 09:50:17.391504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:49.877 [2024-11-07 09:50:17.391511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:49.877 [2024-11-07 09:50:17.391519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:49.877 [2024-11-07 09:50:17.391530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:49.877 [2024-11-07 09:50:17.391538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:49.877 [2024-11-07 09:50:17.391545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:49.877 [2024-11-07 09:50:17.391552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:49.877 [2024-11-07 09:50:17.391560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:49.877 [2024-11-07 09:50:17.391567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:49.877 [2024-11-07 09:50:17.391574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:49.877 [2024-11-07 09:50:17.391582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:49.877 [2024-11-07 09:50:17.391589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:49.877 [2024-11-07 09:50:17.391596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:21:49.877 [2024-11-07 09:50:17.391603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:49.877 [2024-11-07 09:50:17.391614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:49.877 [2024-11-07 09:50:17.391621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:49.877 [2024-11-07 09:50:17.391638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:49.877 [2024-11-07 09:50:17.391646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:49.877 [2024-11-07 09:50:17.391657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:49.877 [2024-11-07 09:50:17.391665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:49.877 [2024-11-07 09:50:17.391676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:49.877 [2024-11-07 09:50:17.391684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:49.877 [2024-11-07 09:50:17.391692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:49.877 [2024-11-07 09:50:17.391702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:49.877 [2024-11-07 09:50:17.391710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:49.877 [2024-11-07 09:50:17.391717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:49.877 [2024-11-07 09:50:17.391725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:49.877 [2024-11-07 09:50:17.391732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:49.877 [2024-11-07 09:50:17.391747] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:49.877 [2024-11-07 09:50:17.391755] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 60039578-222b-4b90-a79a-9095c30dd114 00:21:49.877 [2024-11-07 09:50:17.391763] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:21:49.877 [2024-11-07 09:50:17.391771] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 10176 00:21:49.877 [2024-11-07 09:50:17.391778] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 9216 00:21:49.877 [2024-11-07 09:50:17.391786] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.1042 00:21:49.877 [2024-11-07 09:50:17.391793] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:49.877 [2024-11-07 09:50:17.391808] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:49.877 [2024-11-07 09:50:17.391816] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:49.877 [2024-11-07 09:50:17.391833] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:49.877 [2024-11-07 09:50:17.391840] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:49.877 [2024-11-07 09:50:17.391847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:21:49.877 [2024-11-07 09:50:17.391855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:49.877 [2024-11-07 09:50:17.391864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.915 ms 00:21:49.877 [2024-11-07 09:50:17.391875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.877 [2024-11-07 09:50:17.403984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.877 [2024-11-07 09:50:17.404012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:49.877 [2024-11-07 09:50:17.404023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.077 ms 00:21:49.877 [2024-11-07 09:50:17.404035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.877 [2024-11-07 09:50:17.404390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.877 [2024-11-07 09:50:17.404407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:49.877 [2024-11-07 09:50:17.404415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.337 ms 00:21:49.877 [2024-11-07 09:50:17.404422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.877 [2024-11-07 09:50:17.436667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:49.878 [2024-11-07 09:50:17.436701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:49.878 [2024-11-07 09:50:17.436713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:49.878 [2024-11-07 09:50:17.436720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.878 [2024-11-07 09:50:17.436772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:49.878 [2024-11-07 09:50:17.436780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:49.878 [2024-11-07 09:50:17.436788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:49.878 [2024-11-07 09:50:17.436795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.878 [2024-11-07 09:50:17.436846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:49.878 [2024-11-07 09:50:17.436855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:49.878 [2024-11-07 09:50:17.436863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:49.878 [2024-11-07 09:50:17.436873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.878 [2024-11-07 09:50:17.436887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:49.878 [2024-11-07 09:50:17.436895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:49.878 [2024-11-07 09:50:17.436902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:49.878 [2024-11-07 09:50:17.436909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.878 [2024-11-07 09:50:17.512787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:49.878 [2024-11-07 09:50:17.512825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:49.878 [2024-11-07 09:50:17.512840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:49.878 [2024-11-07 09:50:17.512847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.138 [2024-11-07 
09:50:17.574392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:50.138 [2024-11-07 09:50:17.574434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:50.138 [2024-11-07 09:50:17.574445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:50.138 [2024-11-07 09:50:17.574455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.138 [2024-11-07 09:50:17.574518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:50.138 [2024-11-07 09:50:17.574527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:50.138 [2024-11-07 09:50:17.574535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:50.138 [2024-11-07 09:50:17.574543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.138 [2024-11-07 09:50:17.574580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:50.138 [2024-11-07 09:50:17.574589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:50.138 [2024-11-07 09:50:17.574596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:50.138 [2024-11-07 09:50:17.574603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.138 [2024-11-07 09:50:17.574699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:50.138 [2024-11-07 09:50:17.574709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:50.138 [2024-11-07 09:50:17.574717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:50.138 [2024-11-07 09:50:17.574724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.139 [2024-11-07 09:50:17.574753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:50.139 [2024-11-07 09:50:17.574761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:50.139 [2024-11-07 09:50:17.574770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:50.139 [2024-11-07 09:50:17.574777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.139 [2024-11-07 09:50:17.574809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:50.139 [2024-11-07 09:50:17.574817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:50.139 [2024-11-07 09:50:17.574825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:50.139 [2024-11-07 09:50:17.574832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.139 [2024-11-07 09:50:17.574870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:50.139 [2024-11-07 09:50:17.574879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:50.139 [2024-11-07 09:50:17.574887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:50.139 [2024-11-07 09:50:17.574894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.139 [2024-11-07 09:50:17.575002] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 585.761 ms, result 0 00:21:50.710 00:21:50.710 00:21:50.710 09:50:18 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:53.257 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:21:53.257 
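The 'FTL shutdown' pipeline above persisted the L2P, NV-cache, valid-map, P2L, band and trim metadata, set the clean state, and dumped per-band validity plus device statistics. The reported WAF is simply total writes over user writes: 10176 / 9216 ≈ 1.1042. The md5 check just above ('testfile: OK') is how the restore test proves data integrity across the FTL shutdown/startup cycle; simplified, the pattern is:

    # Simplified sketch of the restore.sh integrity check: checksum the test
    # data once, then re-verify after the FTL device is torn down and
    # brought back up. Paths are the ones visible in this log.
    md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile > testfile.md5
    md5sum -c testfile.md5    # prints 'testfile: OK' on success, as above

With the result verified, the teardown below removes the test artifacts and confirms that pid 74362 is already gone.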
09:50:20 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT
00:21:53.257 09:50:20 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill
00:21:53.257 09:50:20 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:21:53.257 09:50:20 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:21:53.257 09:50:20 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:21:53.257 Process with pid 74362 is not found
00:21:53.257 Remove shared memory files
00:21:53.257 09:50:20 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 74362
00:21:53.257 09:50:20 ftl.ftl_restore -- common/autotest_common.sh@952 -- # '[' -z 74362 ']'
00:21:53.257 09:50:20 ftl.ftl_restore -- common/autotest_common.sh@956 -- # kill -0 74362
00:21:53.257 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (74362) - No such process
00:21:53.257 09:50:20 ftl.ftl_restore -- common/autotest_common.sh@979 -- # echo 'Process with pid 74362 is not found'
00:21:53.257 09:50:20 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm
00:21:53.257 09:50:20 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files
00:21:53.257 09:50:20 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f
00:21:53.257 09:50:20 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f
00:21:53.257 09:50:20 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f
00:21:53.257 09:50:20 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:21:53.257 09:50:20 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f
00:21:53.257
00:21:53.257 real 4m1.910s
00:21:53.257 user 3m52.346s
00:21:53.257 sys 0m10.405s
00:21:53.257 09:50:20 ftl.ftl_restore -- common/autotest_common.sh@1128 -- # xtrace_disable
00:21:53.257 ************************************
00:21:53.257 09:50:20 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x
00:21:53.257 ************************************
00:21:53.257 END TEST ftl_restore
00:21:53.257 ************************************
00:21:53.257 09:50:20 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0
00:21:53.257 09:50:20 ftl -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']'
00:21:53.257 09:50:20 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable
00:21:53.257 09:50:20 ftl -- common/autotest_common.sh@10 -- # set +x
00:21:53.257 ************************************
00:21:53.257 START TEST ftl_dirty_shutdown
00:21:53.257 ************************************
00:21:53.257 09:50:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0
00:21:53.257 * Looking for test storage...
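run_test has launched dirty_shutdown.sh with '-c 0000:00:10.0' selecting the NV-cache device and '0000:00:11.0' as the base device. The getopts/shift trace further below parses those arguments; a simplified sketch of that handling (the meaning of -u is an assumption, it is not exercised in this run):

    # Simplified sketch mirroring the getopts :u:c: and 'shift 2' calls in
    # the xtrace below; -u is assumed to take an FTL UUID and is unused here.
    while getopts ':u:c:' opt; do
      case $opt in
        c) nv_cache=$OPTARG ;;    # 0000:00:10.0 in this run
        u) uuid=$OPTARG ;;        # assumed meaning; not used in this run
      esac
    done
    shift 2                       # drops '-c <bdf>', leaving the base device
    device=$1                     # 0000:00:11.0 in this run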
00:21:53.257 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:53.257 09:50:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:53.257 09:50:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:53.257 09:50:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:21:53.257 09:50:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:53.257 09:50:20 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:53.257 09:50:20 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:53.257 09:50:20 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:53.257 09:50:20 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:21:53.257 09:50:20 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:21:53.257 09:50:20 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:21:53.257 09:50:20 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:21:53.257 09:50:20 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:21:53.257 09:50:20 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:21:53.257 09:50:20 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:21:53.257 09:50:20 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:53.257 09:50:20 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:21:53.257 09:50:20 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:21:53.257 09:50:20 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:53.257 09:50:20 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:53.257 09:50:20 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:21:53.257 09:50:20 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:21:53.257 09:50:20 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:53.257 09:50:20 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:21:53.257 09:50:20 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:21:53.257 09:50:20 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:21:53.257 09:50:20 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:21:53.257 09:50:20 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:53.257 09:50:20 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:21:53.257 09:50:20 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:21:53.257 09:50:20 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:53.257 09:50:20 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:53.257 09:50:20 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:21:53.257 09:50:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:53.257 09:50:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:53.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.257 --rc genhtml_branch_coverage=1 00:21:53.257 --rc genhtml_function_coverage=1 00:21:53.257 --rc genhtml_legend=1 00:21:53.257 --rc geninfo_all_blocks=1 00:21:53.257 --rc geninfo_unexecuted_blocks=1 00:21:53.257 00:21:53.257 ' 00:21:53.257 09:50:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:53.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.257 --rc genhtml_branch_coverage=1 00:21:53.257 --rc genhtml_function_coverage=1 00:21:53.257 --rc genhtml_legend=1 00:21:53.257 --rc geninfo_all_blocks=1 00:21:53.257 --rc geninfo_unexecuted_blocks=1 00:21:53.257 00:21:53.257 ' 00:21:53.257 09:50:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:53.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.257 --rc genhtml_branch_coverage=1 00:21:53.257 --rc genhtml_function_coverage=1 00:21:53.257 --rc genhtml_legend=1 00:21:53.257 --rc geninfo_all_blocks=1 00:21:53.257 --rc geninfo_unexecuted_blocks=1 00:21:53.257 00:21:53.257 ' 00:21:53.257 09:50:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:53.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.257 --rc genhtml_branch_coverage=1 00:21:53.257 --rc genhtml_function_coverage=1 00:21:53.257 --rc genhtml_legend=1 00:21:53.257 --rc geninfo_all_blocks=1 00:21:53.257 --rc geninfo_unexecuted_blocks=1 00:21:53.257 00:21:53.257 ' 00:21:53.257 09:50:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:53.257 09:50:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:21:53.257 09:50:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:53.257 09:50:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:53.257 09:50:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:21:53.257 09:50:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:53.257 09:50:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:53.257 09:50:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:53.257 09:50:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:53.257 09:50:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:53.257 09:50:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:53.257 09:50:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:53.257 09:50:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:53.257 09:50:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:53.257 09:50:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:53.257 09:50:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:53.257 09:50:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:53.257 09:50:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:53.257 09:50:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:53.257 09:50:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:53.257 09:50:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:53.257 09:50:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:53.257 09:50:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:53.257 09:50:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:53.257 09:50:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:53.257 09:50:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:53.258 09:50:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:53.258 09:50:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:53.258 09:50:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:53.258 09:50:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:53.258 09:50:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:53.258 09:50:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:21:53.258 09:50:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:21:53.258 09:50:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:21:53.258 09:50:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:21:53.258 09:50:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:21:53.258 09:50:20 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:21:53.258 09:50:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:21:53.258 09:50:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:21:53.258 09:50:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:21:53.258 09:50:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:21:53.258 09:50:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:21:53.258 09:50:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=76924 00:21:53.258 09:50:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 76924 00:21:53.258 09:50:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@833 -- # '[' -z 76924 ']' 00:21:53.258 09:50:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:53.258 09:50:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:53.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:53.258 09:50:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:53.258 09:50:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:53.258 09:50:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:53.258 09:50:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:21:53.258 [2024-11-07 09:50:20.915692] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
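spdk_tgt has just been launched in the background on core mask 0x1 (svcpid 76924), and waitforlisten holds the test until the target's RPC socket at /var/tmp/spdk.sock answers; everything that follows (controller attach, lvstore and lvol setup, FTL creation) is driven through rpc.py. A rough sketch of that launch-and-wait pattern (the polling probe is illustrative, not waitforlisten's exact implementation):

    # Rough sketch of the launch/wait pattern recorded above. The probe loop
    # is illustrative; waitforlisten's real implementation differs.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    svcpid=$!                     # 76924 in this run
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
    done

The trace that follows attaches the NVMe controller at 0000:00:11.0 as nvme0 and sizes nvme0n1 from bdev_get_bdevs output: 1310720 blocks × 4096 bytes per block = 5120 MiB, matching the bdev_size=5120 computed below.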
00:21:53.258 [2024-11-07 09:50:20.915805] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76924 ] 00:21:53.518 [2024-11-07 09:50:21.077395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:53.518 [2024-11-07 09:50:21.178325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:54.461 09:50:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:54.461 09:50:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@866 -- # return 0 00:21:54.461 09:50:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:54.461 09:50:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:21:54.461 09:50:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:54.461 09:50:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:21:54.461 09:50:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:21:54.461 09:50:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:54.461 09:50:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:54.461 09:50:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:21:54.461 09:50:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:54.461 09:50:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:21:54.461 09:50:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:21:54.461 09:50:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:21:54.461 09:50:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:21:54.461 09:50:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:54.723 09:50:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:21:54.723 { 00:21:54.723 "name": "nvme0n1", 00:21:54.723 "aliases": [ 00:21:54.723 "7af1ed2a-1f2c-4c8c-8b69-f78bacd6ea23" 00:21:54.723 ], 00:21:54.723 "product_name": "NVMe disk", 00:21:54.723 "block_size": 4096, 00:21:54.723 "num_blocks": 1310720, 00:21:54.723 "uuid": "7af1ed2a-1f2c-4c8c-8b69-f78bacd6ea23", 00:21:54.723 "numa_id": -1, 00:21:54.723 "assigned_rate_limits": { 00:21:54.723 "rw_ios_per_sec": 0, 00:21:54.723 "rw_mbytes_per_sec": 0, 00:21:54.723 "r_mbytes_per_sec": 0, 00:21:54.723 "w_mbytes_per_sec": 0 00:21:54.723 }, 00:21:54.723 "claimed": true, 00:21:54.723 "claim_type": "read_many_write_one", 00:21:54.723 "zoned": false, 00:21:54.723 "supported_io_types": { 00:21:54.723 "read": true, 00:21:54.723 "write": true, 00:21:54.723 "unmap": true, 00:21:54.723 "flush": true, 00:21:54.723 "reset": true, 00:21:54.723 "nvme_admin": true, 00:21:54.723 "nvme_io": true, 00:21:54.723 "nvme_io_md": false, 00:21:54.723 "write_zeroes": true, 00:21:54.723 "zcopy": false, 00:21:54.723 "get_zone_info": false, 00:21:54.723 "zone_management": false, 00:21:54.723 "zone_append": false, 00:21:54.723 "compare": true, 00:21:54.723 "compare_and_write": false, 00:21:54.723 "abort": true, 00:21:54.723 "seek_hole": false, 00:21:54.723 "seek_data": false, 00:21:54.723 
"copy": true, 00:21:54.723 "nvme_iov_md": false 00:21:54.723 }, 00:21:54.723 "driver_specific": { 00:21:54.723 "nvme": [ 00:21:54.723 { 00:21:54.723 "pci_address": "0000:00:11.0", 00:21:54.723 "trid": { 00:21:54.723 "trtype": "PCIe", 00:21:54.723 "traddr": "0000:00:11.0" 00:21:54.723 }, 00:21:54.723 "ctrlr_data": { 00:21:54.723 "cntlid": 0, 00:21:54.723 "vendor_id": "0x1b36", 00:21:54.723 "model_number": "QEMU NVMe Ctrl", 00:21:54.723 "serial_number": "12341", 00:21:54.723 "firmware_revision": "8.0.0", 00:21:54.723 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:54.723 "oacs": { 00:21:54.723 "security": 0, 00:21:54.723 "format": 1, 00:21:54.723 "firmware": 0, 00:21:54.723 "ns_manage": 1 00:21:54.723 }, 00:21:54.723 "multi_ctrlr": false, 00:21:54.723 "ana_reporting": false 00:21:54.723 }, 00:21:54.723 "vs": { 00:21:54.723 "nvme_version": "1.4" 00:21:54.723 }, 00:21:54.723 "ns_data": { 00:21:54.723 "id": 1, 00:21:54.723 "can_share": false 00:21:54.723 } 00:21:54.723 } 00:21:54.723 ], 00:21:54.723 "mp_policy": "active_passive" 00:21:54.723 } 00:21:54.723 } 00:21:54.723 ]' 00:21:54.723 09:50:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:21:54.723 09:50:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:21:54.723 09:50:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:21:54.723 09:50:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=1310720 00:21:54.723 09:50:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:21:54.723 09:50:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 5120 00:21:54.723 09:50:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:21:54.723 09:50:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:54.723 09:50:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:21:54.723 09:50:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:54.723 09:50:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:54.985 09:50:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=fe91d909-f58f-4630-9ee5-3a563cd5c40a 00:21:54.985 09:50:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:21:54.985 09:50:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fe91d909-f58f-4630-9ee5-3a563cd5c40a 00:21:55.246 09:50:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:21:55.524 09:50:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=e95e26f1-4a0a-4bc8-ad09-596d8025d015 00:21:55.524 09:50:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u e95e26f1-4a0a-4bc8-ad09-596d8025d015 00:21:55.524 09:50:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=14d007bf-cd5d-4e22-a959-efbcd2596ea8 00:21:55.524 09:50:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:21:55.524 09:50:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 14d007bf-cd5d-4e22-a959-efbcd2596ea8 00:21:55.524 09:50:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:21:55.524 09:50:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:21:55.524 09:50:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=14d007bf-cd5d-4e22-a959-efbcd2596ea8 00:21:55.524 09:50:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:21:55.524 09:50:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 14d007bf-cd5d-4e22-a959-efbcd2596ea8 00:21:55.524 09:50:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=14d007bf-cd5d-4e22-a959-efbcd2596ea8 00:21:55.524 09:50:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:21:55.524 09:50:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:21:55.524 09:50:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:21:55.524 09:50:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 14d007bf-cd5d-4e22-a959-efbcd2596ea8 00:21:55.785 09:50:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:21:55.785 { 00:21:55.785 "name": "14d007bf-cd5d-4e22-a959-efbcd2596ea8", 00:21:55.785 "aliases": [ 00:21:55.785 "lvs/nvme0n1p0" 00:21:55.785 ], 00:21:55.785 "product_name": "Logical Volume", 00:21:55.785 "block_size": 4096, 00:21:55.785 "num_blocks": 26476544, 00:21:55.785 "uuid": "14d007bf-cd5d-4e22-a959-efbcd2596ea8", 00:21:55.785 "assigned_rate_limits": { 00:21:55.785 "rw_ios_per_sec": 0, 00:21:55.785 "rw_mbytes_per_sec": 0, 00:21:55.785 "r_mbytes_per_sec": 0, 00:21:55.785 "w_mbytes_per_sec": 0 00:21:55.785 }, 00:21:55.785 "claimed": false, 00:21:55.785 "zoned": false, 00:21:55.785 "supported_io_types": { 00:21:55.785 "read": true, 00:21:55.785 "write": true, 00:21:55.785 "unmap": true, 00:21:55.785 "flush": false, 00:21:55.785 "reset": true, 00:21:55.785 "nvme_admin": false, 00:21:55.785 "nvme_io": false, 00:21:55.785 "nvme_io_md": false, 00:21:55.785 "write_zeroes": true, 00:21:55.785 "zcopy": false, 00:21:55.785 "get_zone_info": false, 00:21:55.785 "zone_management": false, 00:21:55.785 "zone_append": false, 00:21:55.785 "compare": false, 00:21:55.785 "compare_and_write": false, 00:21:55.785 "abort": false, 00:21:55.785 "seek_hole": true, 00:21:55.785 "seek_data": true, 00:21:55.785 "copy": false, 00:21:55.785 "nvme_iov_md": false 00:21:55.785 }, 00:21:55.785 "driver_specific": { 00:21:55.785 "lvol": { 00:21:55.785 "lvol_store_uuid": "e95e26f1-4a0a-4bc8-ad09-596d8025d015", 00:21:55.785 "base_bdev": "nvme0n1", 00:21:55.785 "thin_provision": true, 00:21:55.785 "num_allocated_clusters": 0, 00:21:55.785 "snapshot": false, 00:21:55.785 "clone": false, 00:21:55.785 "esnap_clone": false 00:21:55.785 } 00:21:55.785 } 00:21:55.785 } 00:21:55.785 ]' 00:21:55.785 09:50:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:21:55.785 09:50:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:21:55.785 09:50:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:21:55.785 09:50:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=26476544 00:21:55.785 09:50:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:21:55.785 09:50:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 103424 00:21:55.785 09:50:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:21:55.785 09:50:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:21:55.785 09:50:23 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:56.048 09:50:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:56.048 09:50:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:56.048 09:50:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 14d007bf-cd5d-4e22-a959-efbcd2596ea8 00:21:56.048 09:50:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=14d007bf-cd5d-4e22-a959-efbcd2596ea8 00:21:56.048 09:50:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:21:56.048 09:50:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:21:56.048 09:50:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:21:56.048 09:50:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 14d007bf-cd5d-4e22-a959-efbcd2596ea8 00:21:56.309 09:50:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:21:56.309 { 00:21:56.309 "name": "14d007bf-cd5d-4e22-a959-efbcd2596ea8", 00:21:56.309 "aliases": [ 00:21:56.309 "lvs/nvme0n1p0" 00:21:56.309 ], 00:21:56.309 "product_name": "Logical Volume", 00:21:56.309 "block_size": 4096, 00:21:56.309 "num_blocks": 26476544, 00:21:56.309 "uuid": "14d007bf-cd5d-4e22-a959-efbcd2596ea8", 00:21:56.309 "assigned_rate_limits": { 00:21:56.309 "rw_ios_per_sec": 0, 00:21:56.309 "rw_mbytes_per_sec": 0, 00:21:56.309 "r_mbytes_per_sec": 0, 00:21:56.309 "w_mbytes_per_sec": 0 00:21:56.309 }, 00:21:56.309 "claimed": false, 00:21:56.309 "zoned": false, 00:21:56.309 "supported_io_types": { 00:21:56.309 "read": true, 00:21:56.309 "write": true, 00:21:56.309 "unmap": true, 00:21:56.309 "flush": false, 00:21:56.309 "reset": true, 00:21:56.309 "nvme_admin": false, 00:21:56.309 "nvme_io": false, 00:21:56.309 "nvme_io_md": false, 00:21:56.309 "write_zeroes": true, 00:21:56.309 "zcopy": false, 00:21:56.309 "get_zone_info": false, 00:21:56.309 "zone_management": false, 00:21:56.309 "zone_append": false, 00:21:56.309 "compare": false, 00:21:56.309 "compare_and_write": false, 00:21:56.309 "abort": false, 00:21:56.309 "seek_hole": true, 00:21:56.309 "seek_data": true, 00:21:56.309 "copy": false, 00:21:56.309 "nvme_iov_md": false 00:21:56.309 }, 00:21:56.309 "driver_specific": { 00:21:56.309 "lvol": { 00:21:56.309 "lvol_store_uuid": "e95e26f1-4a0a-4bc8-ad09-596d8025d015", 00:21:56.309 "base_bdev": "nvme0n1", 00:21:56.309 "thin_provision": true, 00:21:56.309 "num_allocated_clusters": 0, 00:21:56.309 "snapshot": false, 00:21:56.309 "clone": false, 00:21:56.309 "esnap_clone": false 00:21:56.309 } 00:21:56.309 } 00:21:56.309 } 00:21:56.309 ]' 00:21:56.309 09:50:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:21:56.309 09:50:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:21:56.309 09:50:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:21:56.309 09:50:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=26476544 00:21:56.309 09:50:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:21:56.309 09:50:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 103424 00:21:56.309 09:50:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:21:56.309 09:50:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:56.570 09:50:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:21:56.570 09:50:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 14d007bf-cd5d-4e22-a959-efbcd2596ea8 00:21:56.570 09:50:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=14d007bf-cd5d-4e22-a959-efbcd2596ea8 00:21:56.570 09:50:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:21:56.570 09:50:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:21:56.570 09:50:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:21:56.570 09:50:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 14d007bf-cd5d-4e22-a959-efbcd2596ea8 00:21:56.831 09:50:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:21:56.831 { 00:21:56.831 "name": "14d007bf-cd5d-4e22-a959-efbcd2596ea8", 00:21:56.831 "aliases": [ 00:21:56.831 "lvs/nvme0n1p0" 00:21:56.831 ], 00:21:56.831 "product_name": "Logical Volume", 00:21:56.831 "block_size": 4096, 00:21:56.831 "num_blocks": 26476544, 00:21:56.831 "uuid": "14d007bf-cd5d-4e22-a959-efbcd2596ea8", 00:21:56.831 "assigned_rate_limits": { 00:21:56.831 "rw_ios_per_sec": 0, 00:21:56.831 "rw_mbytes_per_sec": 0, 00:21:56.831 "r_mbytes_per_sec": 0, 00:21:56.831 "w_mbytes_per_sec": 0 00:21:56.831 }, 00:21:56.831 "claimed": false, 00:21:56.831 "zoned": false, 00:21:56.831 "supported_io_types": { 00:21:56.831 "read": true, 00:21:56.831 "write": true, 00:21:56.831 "unmap": true, 00:21:56.831 "flush": false, 00:21:56.831 "reset": true, 00:21:56.831 "nvme_admin": false, 00:21:56.831 "nvme_io": false, 00:21:56.831 "nvme_io_md": false, 00:21:56.831 "write_zeroes": true, 00:21:56.831 "zcopy": false, 00:21:56.831 "get_zone_info": false, 00:21:56.831 "zone_management": false, 00:21:56.831 "zone_append": false, 00:21:56.831 "compare": false, 00:21:56.831 "compare_and_write": false, 00:21:56.831 "abort": false, 00:21:56.831 "seek_hole": true, 00:21:56.831 "seek_data": true, 00:21:56.831 "copy": false, 00:21:56.831 "nvme_iov_md": false 00:21:56.831 }, 00:21:56.831 "driver_specific": { 00:21:56.831 "lvol": { 00:21:56.831 "lvol_store_uuid": "e95e26f1-4a0a-4bc8-ad09-596d8025d015", 00:21:56.831 "base_bdev": "nvme0n1", 00:21:56.831 "thin_provision": true, 00:21:56.831 "num_allocated_clusters": 0, 00:21:56.831 "snapshot": false, 00:21:56.831 "clone": false, 00:21:56.831 "esnap_clone": false 00:21:56.831 } 00:21:56.831 } 00:21:56.831 } 00:21:56.831 ]' 00:21:56.831 09:50:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:21:56.831 09:50:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:21:56.831 09:50:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:21:56.831 09:50:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=26476544 00:21:56.831 09:50:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:21:56.831 09:50:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 103424 00:21:56.831 09:50:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:21:56.831 09:50:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 14d007bf-cd5d-4e22-a959-efbcd2596ea8 
--l2p_dram_limit 10' 00:21:56.831 09:50:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:21:56.831 09:50:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:21:56.831 09:50:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:21:56.831 09:50:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 14d007bf-cd5d-4e22-a959-efbcd2596ea8 --l2p_dram_limit 10 -c nvc0n1p0 00:21:57.093 [2024-11-07 09:50:24.611119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.093 [2024-11-07 09:50:24.611176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:57.093 [2024-11-07 09:50:24.611192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:57.093 [2024-11-07 09:50:24.611200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.093 [2024-11-07 09:50:24.611262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.093 [2024-11-07 09:50:24.611289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:57.093 [2024-11-07 09:50:24.611299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:21:57.093 [2024-11-07 09:50:24.611307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.093 [2024-11-07 09:50:24.611332] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:57.093 [2024-11-07 09:50:24.612112] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:57.093 [2024-11-07 09:50:24.612140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.093 [2024-11-07 09:50:24.612148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:57.093 [2024-11-07 09:50:24.612158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.814 ms 00:21:57.093 [2024-11-07 09:50:24.612165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.093 [2024-11-07 09:50:24.612199] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 4d0892d7-c7ff-4fc8-aad2-a34eb0dcb004 00:21:57.093 [2024-11-07 09:50:24.613288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.093 [2024-11-07 09:50:24.613321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:57.093 [2024-11-07 09:50:24.613331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:21:57.093 [2024-11-07 09:50:24.613340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.093 [2024-11-07 09:50:24.618575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.093 [2024-11-07 09:50:24.618608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:57.093 [2024-11-07 09:50:24.618619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.188 ms 00:21:57.093 [2024-11-07 09:50:24.618639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.093 [2024-11-07 09:50:24.618763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.093 [2024-11-07 09:50:24.618775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:57.093 [2024-11-07 09:50:24.618785] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:21:57.093 [2024-11-07 09:50:24.618796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.093 [2024-11-07 09:50:24.618859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.093 [2024-11-07 09:50:24.618871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:57.093 [2024-11-07 09:50:24.618878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:57.093 [2024-11-07 09:50:24.618889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.093 [2024-11-07 09:50:24.618910] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:57.093 [2024-11-07 09:50:24.622476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.093 [2024-11-07 09:50:24.622506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:57.093 [2024-11-07 09:50:24.622518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.569 ms 00:21:57.093 [2024-11-07 09:50:24.622525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.093 [2024-11-07 09:50:24.622557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.093 [2024-11-07 09:50:24.622565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:57.093 [2024-11-07 09:50:24.622575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:57.093 [2024-11-07 09:50:24.622582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.093 [2024-11-07 09:50:24.622599] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:57.093 [2024-11-07 09:50:24.622749] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:57.093 [2024-11-07 09:50:24.622765] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:57.093 [2024-11-07 09:50:24.622776] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:57.093 [2024-11-07 09:50:24.622787] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:57.093 [2024-11-07 09:50:24.622795] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:57.093 [2024-11-07 09:50:24.622804] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:57.093 [2024-11-07 09:50:24.622812] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:57.093 [2024-11-07 09:50:24.622823] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:57.093 [2024-11-07 09:50:24.622829] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:57.093 [2024-11-07 09:50:24.622838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.093 [2024-11-07 09:50:24.622845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:57.093 [2024-11-07 09:50:24.622854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.240 ms 00:21:57.093 [2024-11-07 09:50:24.622866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.093 [2024-11-07 09:50:24.622963] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.093 [2024-11-07 09:50:24.622971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:57.093 [2024-11-07 09:50:24.622979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:21:57.093 [2024-11-07 09:50:24.622986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.093 [2024-11-07 09:50:24.623108] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:57.093 [2024-11-07 09:50:24.623118] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:57.093 [2024-11-07 09:50:24.623127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:57.093 [2024-11-07 09:50:24.623135] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:57.093 [2024-11-07 09:50:24.623144] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:57.093 [2024-11-07 09:50:24.623151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:57.093 [2024-11-07 09:50:24.623159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:57.093 [2024-11-07 09:50:24.623166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:57.093 [2024-11-07 09:50:24.623174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:57.093 [2024-11-07 09:50:24.623180] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:57.093 [2024-11-07 09:50:24.623188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:57.093 [2024-11-07 09:50:24.623195] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:57.093 [2024-11-07 09:50:24.623203] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:57.093 [2024-11-07 09:50:24.623209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:57.093 [2024-11-07 09:50:24.623217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:57.093 [2024-11-07 09:50:24.623224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:57.093 [2024-11-07 09:50:24.623233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:57.093 [2024-11-07 09:50:24.623240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:57.093 [2024-11-07 09:50:24.623249] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:57.093 [2024-11-07 09:50:24.623256] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:57.093 [2024-11-07 09:50:24.623264] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:57.093 [2024-11-07 09:50:24.623289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:57.093 [2024-11-07 09:50:24.623298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:57.094 [2024-11-07 09:50:24.623305] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:57.094 [2024-11-07 09:50:24.623314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:57.094 [2024-11-07 09:50:24.623321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:57.094 [2024-11-07 09:50:24.623329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:57.094 [2024-11-07 09:50:24.623335] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:57.094 [2024-11-07 09:50:24.623343] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:57.094 [2024-11-07 09:50:24.623350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:57.094 [2024-11-07 09:50:24.623358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:57.094 [2024-11-07 09:50:24.623364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:57.094 [2024-11-07 09:50:24.623374] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:57.094 [2024-11-07 09:50:24.623381] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:57.094 [2024-11-07 09:50:24.623388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:57.094 [2024-11-07 09:50:24.623395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:57.094 [2024-11-07 09:50:24.623402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:57.094 [2024-11-07 09:50:24.623409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:57.094 [2024-11-07 09:50:24.623417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:57.094 [2024-11-07 09:50:24.623423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:57.094 [2024-11-07 09:50:24.623431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:57.094 [2024-11-07 09:50:24.623438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:57.094 [2024-11-07 09:50:24.623445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:57.094 [2024-11-07 09:50:24.623452] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:57.094 [2024-11-07 09:50:24.623460] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:57.094 [2024-11-07 09:50:24.623467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:57.094 [2024-11-07 09:50:24.623477] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:57.094 [2024-11-07 09:50:24.623485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:57.094 [2024-11-07 09:50:24.623494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:57.094 [2024-11-07 09:50:24.623501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:57.094 [2024-11-07 09:50:24.623509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:57.094 [2024-11-07 09:50:24.623516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:57.094 [2024-11-07 09:50:24.623524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:57.094 [2024-11-07 09:50:24.623533] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:57.094 [2024-11-07 09:50:24.623544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:57.094 [2024-11-07 09:50:24.623553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:57.094 [2024-11-07 09:50:24.623564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:57.094 [2024-11-07 09:50:24.623572] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:57.094 [2024-11-07 09:50:24.623580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:57.094 [2024-11-07 09:50:24.623587] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:57.094 [2024-11-07 09:50:24.623596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:57.094 [2024-11-07 09:50:24.623603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:57.094 [2024-11-07 09:50:24.623611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:57.094 [2024-11-07 09:50:24.623618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:57.094 [2024-11-07 09:50:24.623639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:57.094 [2024-11-07 09:50:24.623646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:57.094 [2024-11-07 09:50:24.623654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:57.094 [2024-11-07 09:50:24.623662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:57.094 [2024-11-07 09:50:24.623672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:57.094 [2024-11-07 09:50:24.623679] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:57.094 [2024-11-07 09:50:24.623688] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:57.094 [2024-11-07 09:50:24.623696] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:57.094 [2024-11-07 09:50:24.623705] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:57.094 [2024-11-07 09:50:24.623712] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:57.094 [2024-11-07 09:50:24.623721] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:57.094 [2024-11-07 09:50:24.623729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.094 [2024-11-07 09:50:24.623738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:57.094 [2024-11-07 09:50:24.623745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.697 ms 00:21:57.094 [2024-11-07 09:50:24.623755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.094 [2024-11-07 09:50:24.623793] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:21:57.094 [2024-11-07 09:50:24.623805] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:00.418 [2024-11-07 09:50:28.077821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.418 [2024-11-07 09:50:28.077882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:00.418 [2024-11-07 09:50:28.077898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3454.015 ms 00:22:00.418 [2024-11-07 09:50:28.077909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.676 [2024-11-07 09:50:28.103433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.676 [2024-11-07 09:50:28.103478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:00.676 [2024-11-07 09:50:28.103491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.320 ms 00:22:00.676 [2024-11-07 09:50:28.103502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.676 [2024-11-07 09:50:28.103621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.676 [2024-11-07 09:50:28.103646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:00.676 [2024-11-07 09:50:28.103655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:22:00.676 [2024-11-07 09:50:28.103666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.676 [2024-11-07 09:50:28.133890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.676 [2024-11-07 09:50:28.133926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:00.676 [2024-11-07 09:50:28.133937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.185 ms 00:22:00.676 [2024-11-07 09:50:28.133945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.676 [2024-11-07 09:50:28.133973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.676 [2024-11-07 09:50:28.133987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:00.676 [2024-11-07 09:50:28.133995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:22:00.676 [2024-11-07 09:50:28.134004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.676 [2024-11-07 09:50:28.134358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.676 [2024-11-07 09:50:28.134384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:00.677 [2024-11-07 09:50:28.134394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:22:00.677 [2024-11-07 09:50:28.134403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.677 [2024-11-07 09:50:28.134509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.677 [2024-11-07 09:50:28.134522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:00.677 [2024-11-07 09:50:28.134533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:22:00.677 [2024-11-07 09:50:28.134544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.677 [2024-11-07 09:50:28.148368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.677 [2024-11-07 09:50:28.148403] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:00.677 [2024-11-07 09:50:28.148412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.805 ms 00:22:00.677 [2024-11-07 09:50:28.148421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.677 [2024-11-07 09:50:28.159689] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:00.677 [2024-11-07 09:50:28.162263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.677 [2024-11-07 09:50:28.162293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:00.677 [2024-11-07 09:50:28.162305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.775 ms 00:22:00.677 [2024-11-07 09:50:28.162313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.677 [2024-11-07 09:50:28.231116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.677 [2024-11-07 09:50:28.231169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:00.677 [2024-11-07 09:50:28.231184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.772 ms 00:22:00.677 [2024-11-07 09:50:28.231193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.677 [2024-11-07 09:50:28.231381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.677 [2024-11-07 09:50:28.231394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:00.677 [2024-11-07 09:50:28.231407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:22:00.677 [2024-11-07 09:50:28.231415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.677 [2024-11-07 09:50:28.254591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.677 [2024-11-07 09:50:28.254634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:00.677 [2024-11-07 09:50:28.254647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.130 ms 00:22:00.677 [2024-11-07 09:50:28.254655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.677 [2024-11-07 09:50:28.276522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.677 [2024-11-07 09:50:28.276552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:00.677 [2024-11-07 09:50:28.276565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.829 ms 00:22:00.677 [2024-11-07 09:50:28.276572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.677 [2024-11-07 09:50:28.277137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.677 [2024-11-07 09:50:28.277159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:00.677 [2024-11-07 09:50:28.277169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.532 ms 00:22:00.677 [2024-11-07 09:50:28.277176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.677 [2024-11-07 09:50:28.344168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.677 [2024-11-07 09:50:28.344206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:00.677 [2024-11-07 09:50:28.344223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.956 ms 00:22:00.677 [2024-11-07 09:50:28.344231] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.934 [2024-11-07 09:50:28.367504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.934 [2024-11-07 09:50:28.367541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:00.934 [2024-11-07 09:50:28.367554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.205 ms 00:22:00.934 [2024-11-07 09:50:28.367562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.935 [2024-11-07 09:50:28.390451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.935 [2024-11-07 09:50:28.390486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:22:00.935 [2024-11-07 09:50:28.390499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.853 ms 00:22:00.935 [2024-11-07 09:50:28.390506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.935 [2024-11-07 09:50:28.413189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.935 [2024-11-07 09:50:28.413222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:00.935 [2024-11-07 09:50:28.413235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.647 ms 00:22:00.935 [2024-11-07 09:50:28.413242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.935 [2024-11-07 09:50:28.413280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.935 [2024-11-07 09:50:28.413288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:00.935 [2024-11-07 09:50:28.413300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:00.935 [2024-11-07 09:50:28.413308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.935 [2024-11-07 09:50:28.413382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.935 [2024-11-07 09:50:28.413391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:00.935 [2024-11-07 09:50:28.413403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:22:00.935 [2024-11-07 09:50:28.413410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.935 [2024-11-07 09:50:28.414214] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3802.695 ms, result 0 00:22:00.935 { 00:22:00.935 "name": "ftl0", 00:22:00.935 "uuid": "4d0892d7-c7ff-4fc8-aad2-a34eb0dcb004" 00:22:00.935 } 00:22:00.935 09:50:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:22:00.935 09:50:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:01.195 09:50:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:22:01.195 09:50:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:22:01.195 09:50:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:22:01.458 /dev/nbd0 00:22:01.458 09:50:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:22:01.458 09:50:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:22:01.458 09:50:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # local i 00:22:01.458 09:50:28 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@873 -- # (( i = 1 )) 00:22:01.458 09:50:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:22:01.458 09:50:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:22:01.458 09:50:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # break 00:22:01.458 09:50:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:22:01.458 09:50:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:22:01.458 09:50:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:22:01.458 1+0 records in 00:22:01.458 1+0 records out 00:22:01.458 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000343901 s, 11.9 MB/s 00:22:01.458 09:50:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:01.458 09:50:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # size=4096 00:22:01.458 09:50:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:01.458 09:50:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:22:01.458 09:50:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # return 0 00:22:01.458 09:50:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:22:01.458 [2024-11-07 09:50:29.037092] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:22:01.458 [2024-11-07 09:50:29.037233] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77070 ] 00:22:01.720 [2024-11-07 09:50:29.199086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.720 [2024-11-07 09:50:29.326613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:03.104  [2024-11-07T09:50:31.714Z] Copying: 195/1024 [MB] (195 MBps) [2024-11-07T09:50:32.658Z] Copying: 392/1024 [MB] (196 MBps) [2024-11-07T09:50:33.600Z] Copying: 588/1024 [MB] (196 MBps) [2024-11-07T09:50:35.004Z] Copying: 784/1024 [MB] (195 MBps) [2024-11-07T09:50:35.004Z] Copying: 979/1024 [MB] (195 MBps) [2024-11-07T09:50:35.575Z] Copying: 1024/1024 [MB] (average 195 MBps) 00:22:07.904 00:22:07.904 09:50:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:10.454 09:50:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:22:10.454 [2024-11-07 09:50:37.857017] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
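For reference, the write phase traced above reduces to four commands. The sketch below replays them with relative paths standing in for the absolute /home/vagrant/spdk_repo ones in the trace, and omits the autotest bookkeeping (waitfornbd and friends); the sizes come straight from the log: 262144 blocks x 4096 B = 1 GiB, which is the Copying: 1024/1024 [MB] total reported above.

    # expose the FTL bdev created earlier as a kernel block device
    scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0

    # generate 1 GiB of random test data (262144 x 4 KiB blocks)
    build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=test/ftl/testfile --bs=4096 --count=262144

    # checksum of the source data, recorded by dirty_shutdown.sh@76
    md5sum test/ftl/testfile

    # replay the file onto the FTL device, bypassing the page cache
    build/bin/spdk_dd -m 0x2 --if=test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct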
00:22:10.454 [2024-11-07 09:50:37.857173] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77164 ] 00:22:10.454 [2024-11-07 09:50:38.021600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:10.728 [2024-11-07 09:50:38.146745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:12.111  [2024-11-07T09:50:40.728Z] Copying: 11/1024 [MB] (11 MBps) [2024-11-07T09:50:41.671Z] Copying: 21/1024 [MB] (10 MBps) [2024-11-07T09:50:42.606Z] Copying: 29780/1048576 [kB] (7588 kBps) [2024-11-07T09:50:43.539Z] Copying: 53/1024 [MB] (24 MBps) [2024-11-07T09:50:44.473Z] Copying: 82/1024 [MB] (29 MBps) [2024-11-07T09:50:45.407Z] Copying: 112/1024 [MB] (29 MBps) [2024-11-07T09:50:46.781Z] Copying: 142/1024 [MB] (30 MBps) [2024-11-07T09:50:47.716Z] Copying: 175/1024 [MB] (33 MBps) [2024-11-07T09:50:48.650Z] Copying: 211/1024 [MB] (35 MBps) [2024-11-07T09:50:49.606Z] Copying: 247/1024 [MB] (35 MBps) [2024-11-07T09:50:50.535Z] Copying: 278/1024 [MB] (30 MBps) [2024-11-07T09:50:51.468Z] Copying: 308/1024 [MB] (30 MBps) [2024-11-07T09:50:52.409Z] Copying: 339/1024 [MB] (30 MBps) [2024-11-07T09:50:53.782Z] Copying: 369/1024 [MB] (30 MBps) [2024-11-07T09:50:54.715Z] Copying: 404/1024 [MB] (34 MBps) [2024-11-07T09:50:55.648Z] Copying: 438/1024 [MB] (34 MBps) [2024-11-07T09:50:56.581Z] Copying: 468/1024 [MB] (30 MBps) [2024-11-07T09:50:57.555Z] Copying: 499/1024 [MB] (30 MBps) [2024-11-07T09:50:59.618Z] Copying: 532/1024 [MB] (32 MBps) [2024-11-07T09:50:59.618Z] Copying: 567/1024 [MB] (35 MBps) [2024-11-07T09:51:00.549Z] Copying: 600/1024 [MB] (32 MBps) [2024-11-07T09:51:01.483Z] Copying: 629/1024 [MB] (29 MBps) [2024-11-07T09:51:02.464Z] Copying: 657/1024 [MB] (27 MBps) [2024-11-07T09:51:03.836Z] Copying: 688/1024 [MB] (30 MBps) [2024-11-07T09:51:04.769Z] Copying: 719/1024 [MB] (31 MBps) [2024-11-07T09:51:05.703Z] Copying: 749/1024 [MB] (29 MBps) [2024-11-07T09:51:06.641Z] Copying: 779/1024 [MB] (29 MBps) [2024-11-07T09:51:07.612Z] Copying: 804/1024 [MB] (25 MBps) [2024-11-07T09:51:08.544Z] Copying: 833/1024 [MB] (28 MBps) [2024-11-07T09:51:09.478Z] Copying: 863/1024 [MB] (30 MBps) [2024-11-07T09:51:10.409Z] Copying: 891/1024 [MB] (28 MBps) [2024-11-07T09:51:11.780Z] Copying: 921/1024 [MB] (30 MBps) [2024-11-07T09:51:12.715Z] Copying: 954/1024 [MB] (32 MBps) [2024-11-07T09:51:13.647Z] Copying: 984/1024 [MB] (30 MBps) [2024-11-07T09:51:13.647Z] Copying: 1018/1024 [MB] (33 MBps) [2024-11-07T09:51:14.225Z] Copying: 1024/1024 [MB] (average 29 MBps) 00:22:46.554 00:22:46.554 09:51:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:22:46.554 09:51:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:22:46.831 09:51:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:22:47.094 [2024-11-07 09:51:14.555019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.094 [2024-11-07 09:51:14.555070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:47.094 [2024-11-07 09:51:14.555084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:47.094 [2024-11-07 09:51:14.555094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:22:47.094 [2024-11-07 09:51:14.555118] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:47.094 [2024-11-07 09:51:14.557714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.094 [2024-11-07 09:51:14.557744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:47.094 [2024-11-07 09:51:14.557756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.576 ms 00:22:47.094 [2024-11-07 09:51:14.557764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.094 [2024-11-07 09:51:14.560235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.094 [2024-11-07 09:51:14.560266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:47.094 [2024-11-07 09:51:14.560277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.439 ms 00:22:47.094 [2024-11-07 09:51:14.560284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.094 [2024-11-07 09:51:14.576349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.094 [2024-11-07 09:51:14.576383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:47.094 [2024-11-07 09:51:14.576395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.044 ms 00:22:47.094 [2024-11-07 09:51:14.576403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.094 [2024-11-07 09:51:14.582568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.094 [2024-11-07 09:51:14.582593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:47.094 [2024-11-07 09:51:14.582605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.131 ms 00:22:47.094 [2024-11-07 09:51:14.582613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.094 [2024-11-07 09:51:14.606541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.094 [2024-11-07 09:51:14.606574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:47.094 [2024-11-07 09:51:14.606587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.855 ms 00:22:47.094 [2024-11-07 09:51:14.606594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.094 [2024-11-07 09:51:14.622377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.094 [2024-11-07 09:51:14.622416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:47.094 [2024-11-07 09:51:14.622429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.743 ms 00:22:47.094 [2024-11-07 09:51:14.622439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.094 [2024-11-07 09:51:14.622584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.094 [2024-11-07 09:51:14.622595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:47.094 [2024-11-07 09:51:14.622605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:22:47.094 [2024-11-07 09:51:14.622612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.094 [2024-11-07 09:51:14.646495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.094 [2024-11-07 09:51:14.646526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 
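The unload being traced here is the graceful half of the test: the nbd endpoint is synced and detached, and bdev_ftl_unload walks the persist steps shown around this point (L2P, NV cache metadata, valid map, P2L, band and trim metadata, superblock) before the Set FTL clean state step just below. A minimal sketch of that teardown, again with relative paths in place of the absolute ones in the trace:

    # flush outstanding writes and detach the nbd endpoint
    sync /dev/nbd0
    scripts/rpc.py nbd_stop_disk /dev/nbd0

    # graceful FTL shutdown: persists metadata, then marks the volume clean
    scripts/rpc.py bdev_ftl_unload -b ftl0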
00:22:47.094 [2024-11-07 09:51:14.646539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.863 ms 00:22:47.094 [2024-11-07 09:51:14.646546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.094 [2024-11-07 09:51:14.669501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.094 [2024-11-07 09:51:14.669533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:47.094 [2024-11-07 09:51:14.669545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.917 ms 00:22:47.094 [2024-11-07 09:51:14.669552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.094 [2024-11-07 09:51:14.692041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.094 [2024-11-07 09:51:14.692073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:47.094 [2024-11-07 09:51:14.692085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.450 ms 00:22:47.094 [2024-11-07 09:51:14.692092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.094 [2024-11-07 09:51:14.714587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.094 [2024-11-07 09:51:14.714626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:47.094 [2024-11-07 09:51:14.714645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.421 ms 00:22:47.094 [2024-11-07 09:51:14.714652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.094 [2024-11-07 09:51:14.714687] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:47.094 [2024-11-07 09:51:14.714700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:47.094 [2024-11-07 09:51:14.714712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:47.094 [2024-11-07 09:51:14.714720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:47.094 [2024-11-07 09:51:14.714729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:47.094 [2024-11-07 09:51:14.714737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:47.094 [2024-11-07 09:51:14.714746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:47.094 [2024-11-07 09:51:14.714753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.714765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.714772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.714781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.714789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.714798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.714805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 
00:22:47.095 [2024-11-07 09:51:14.714814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.714821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.714830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.714837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.714846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.714853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.714862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.714869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.714880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.714887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.714897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.714904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.714913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.714920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.714929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.714937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.714946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.714955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.714963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.714971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.714979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.714987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.714995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.715002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.715012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 
wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.715019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.715029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.715036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.715045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.715052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.715061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.715068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.715077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.715084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.715094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.715107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.715116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.715123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.715131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.715139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.715148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.715155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.715165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.715173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.715182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.715190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.715198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.715206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.715215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.715223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.715232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.715240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.715248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.715256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.715264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.715272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.715288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.715295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.715306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.715314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.715323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.715330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.715339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.715346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.715355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.715362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.715371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.715379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.715388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.715396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.715405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.715412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.715421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.715428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.715438] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.715445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.715454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.715462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.715470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.715478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.715487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.715495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.715504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:47.095 [2024-11-07 09:51:14.715511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:47.096 [2024-11-07 09:51:14.715520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:47.096 [2024-11-07 09:51:14.715528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:47.096 [2024-11-07 09:51:14.715538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:47.096 [2024-11-07 09:51:14.715553] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:47.096 [2024-11-07 09:51:14.715563] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4d0892d7-c7ff-4fc8-aad2-a34eb0dcb004 00:22:47.096 [2024-11-07 09:51:14.715571] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:47.096 [2024-11-07 09:51:14.715581] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:47.096 [2024-11-07 09:51:14.715588] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:47.096 [2024-11-07 09:51:14.715599] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:47.096 [2024-11-07 09:51:14.715605] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:47.096 [2024-11-07 09:51:14.715614] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:47.096 [2024-11-07 09:51:14.715621] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:47.096 [2024-11-07 09:51:14.715643] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:47.096 [2024-11-07 09:51:14.715650] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:47.096 [2024-11-07 09:51:14.715659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.096 [2024-11-07 09:51:14.715666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:47.096 [2024-11-07 09:51:14.715676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.973 ms 00:22:47.096 [2024-11-07 09:51:14.715684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.096 [2024-11-07 09:51:14.728161] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.096 [2024-11-07 09:51:14.728190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:47.096 [2024-11-07 09:51:14.728203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.445 ms 00:22:47.096 [2024-11-07 09:51:14.728210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.096 [2024-11-07 09:51:14.728564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.096 [2024-11-07 09:51:14.728580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:47.096 [2024-11-07 09:51:14.728590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:22:47.096 [2024-11-07 09:51:14.728597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.357 [2024-11-07 09:51:14.769792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:47.357 [2024-11-07 09:51:14.769831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:47.357 [2024-11-07 09:51:14.769843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:47.357 [2024-11-07 09:51:14.769851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.357 [2024-11-07 09:51:14.769917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:47.357 [2024-11-07 09:51:14.769925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:47.357 [2024-11-07 09:51:14.769934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:47.357 [2024-11-07 09:51:14.769941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.357 [2024-11-07 09:51:14.770042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:47.357 [2024-11-07 09:51:14.770052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:47.357 [2024-11-07 09:51:14.770063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:47.357 [2024-11-07 09:51:14.770070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.357 [2024-11-07 09:51:14.770091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:47.357 [2024-11-07 09:51:14.770099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:47.357 [2024-11-07 09:51:14.770107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:47.357 [2024-11-07 09:51:14.770114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.357 [2024-11-07 09:51:14.846785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:47.357 [2024-11-07 09:51:14.846833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:47.357 [2024-11-07 09:51:14.846845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:47.357 [2024-11-07 09:51:14.846852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.357 [2024-11-07 09:51:14.909512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:47.357 [2024-11-07 09:51:14.909558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:47.357 [2024-11-07 09:51:14.909571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:47.357 [2024-11-07 09:51:14.909578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:22:47.357 [2024-11-07 09:51:14.909667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:47.357 [2024-11-07 09:51:14.909677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:47.357 [2024-11-07 09:51:14.909687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:47.357 [2024-11-07 09:51:14.909697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.357 [2024-11-07 09:51:14.909757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:47.357 [2024-11-07 09:51:14.909767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:47.357 [2024-11-07 09:51:14.909777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:47.357 [2024-11-07 09:51:14.909784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.357 [2024-11-07 09:51:14.909871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:47.357 [2024-11-07 09:51:14.909881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:47.357 [2024-11-07 09:51:14.909890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:47.357 [2024-11-07 09:51:14.909897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.357 [2024-11-07 09:51:14.909930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:47.357 [2024-11-07 09:51:14.909939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:47.357 [2024-11-07 09:51:14.909948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:47.357 [2024-11-07 09:51:14.909955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.357 [2024-11-07 09:51:14.909992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:47.357 [2024-11-07 09:51:14.910000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:47.357 [2024-11-07 09:51:14.910009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:47.357 [2024-11-07 09:51:14.910016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.357 [2024-11-07 09:51:14.910062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:47.357 [2024-11-07 09:51:14.910072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:47.357 [2024-11-07 09:51:14.910081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:47.357 [2024-11-07 09:51:14.910088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.358 [2024-11-07 09:51:14.910210] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 355.157 ms, result 0 00:22:47.358 true 00:22:47.358 09:51:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 76924 00:22:47.358 09:51:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid76924 00:22:47.358 09:51:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:22:47.358 [2024-11-07 09:51:14.998552] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
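
Note on the statistics dump above: WAF (write amplification factor) is printed as inf because this FTL instance recorded total writes: 960 against user writes: 0, consistent with WAF being total device writes divided by user writes, which is infinite when no user I/O has landed yet. A minimal shell sketch that recomputes the figure from a saved copy of this log (ftl.log is a hypothetical path, not part of the test):

    # Pull the two counters out of the stats dump and divide; prints 'inf' when no user I/O occurred.
    total=$(grep -o 'total writes: [0-9]*' ftl.log | head -n1 | awk '{print $3}')
    user=$(grep -o 'user writes: [0-9]*' ftl.log | head -n1 | awk '{print $3}')
    awk -v t="$total" -v u="$user" 'BEGIN { if (u == 0) print "WAF: inf"; else printf "WAF: %.2f\n", t / u }'
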
00:22:47.358 [2024-11-07 09:51:14.998672] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77554 ] 00:22:47.619 [2024-11-07 09:51:15.157938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:47.619 [2024-11-07 09:51:15.259336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:48.998  [2024-11-07T09:51:17.604Z] Copying: 222/1024 [MB] (222 MBps) [2024-11-07T09:51:18.539Z] Copying: 479/1024 [MB] (256 MBps) [2024-11-07T09:51:19.912Z] Copying: 737/1024 [MB] (258 MBps) [2024-11-07T09:51:19.912Z] Copying: 983/1024 [MB] (245 MBps) [2024-11-07T09:51:20.477Z] Copying: 1024/1024 [MB] (average 246 MBps) 00:22:52.806 00:22:52.806 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 76924 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:22:52.806 09:51:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:52.806 [2024-11-07 09:51:20.280672] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:22:52.806 [2024-11-07 09:51:20.280817] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77613 ] 00:22:52.806 [2024-11-07 09:51:20.443683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.064 [2024-11-07 09:51:20.527743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:53.323 [2024-11-07 09:51:20.739594] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:53.323 [2024-11-07 09:51:20.739654] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:53.323 [2024-11-07 09:51:20.802210] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:22:53.323 [2024-11-07 09:51:20.802442] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:22:53.323 [2024-11-07 09:51:20.802550] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:22:53.323 [2024-11-07 09:51:20.970564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.323 [2024-11-07 09:51:20.970611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:53.323 [2024-11-07 09:51:20.970622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:53.323 [2024-11-07 09:51:20.970639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.323 [2024-11-07 09:51:20.970683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.323 [2024-11-07 09:51:20.970691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:53.323 [2024-11-07 09:51:20.970697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:22:53.323 [2024-11-07 09:51:20.970703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.323 [2024-11-07 09:51:20.970718] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:53.323 
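
The spdk_dd runs in this test each move exactly 1 GiB: with the 4096-byte I/O size, --count=262144 works out to 1024 MiB, which is why the progress meter tops out at Copying: 1024/1024 [MB]; and assuming --seek counts output blocks the way dd(1) does, --seek=262144 places the second write at a 1 GiB offset inside ftl0. A quick arithmetic check:

    # Transfer size and destination offset implied by the spdk_dd arguments (4096-byte I/O units assumed for both runs).
    bs=4096 count=262144 seek=262144
    echo "transfer: $(( bs * count / 1024 / 1024 )) MiB"   # -> 1024 MiB
    echo "offset:   $(( bs * seek )) bytes"                # -> 1073741824 (1 GiB)
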
[2024-11-07 09:51:20.971230] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:53.323 [2024-11-07 09:51:20.971249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.323 [2024-11-07 09:51:20.971255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:53.323 [2024-11-07 09:51:20.971262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.534 ms 00:22:53.323 [2024-11-07 09:51:20.971268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.323 [2024-11-07 09:51:20.972237] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:53.323 [2024-11-07 09:51:20.981966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.323 [2024-11-07 09:51:20.982000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:53.323 [2024-11-07 09:51:20.982009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.730 ms 00:22:53.323 [2024-11-07 09:51:20.982016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.323 [2024-11-07 09:51:20.982058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.323 [2024-11-07 09:51:20.982067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:53.323 [2024-11-07 09:51:20.982073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:22:53.323 [2024-11-07 09:51:20.982079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.323 [2024-11-07 09:51:20.986390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.323 [2024-11-07 09:51:20.986415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:53.323 [2024-11-07 09:51:20.986422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.267 ms 00:22:53.323 [2024-11-07 09:51:20.986428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.323 [2024-11-07 09:51:20.986481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.323 [2024-11-07 09:51:20.986488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:53.323 [2024-11-07 09:51:20.986494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:22:53.323 [2024-11-07 09:51:20.986500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.323 [2024-11-07 09:51:20.986533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.323 [2024-11-07 09:51:20.986542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:53.323 [2024-11-07 09:51:20.986548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:53.323 [2024-11-07 09:51:20.986554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.323 [2024-11-07 09:51:20.986569] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:53.323 [2024-11-07 09:51:20.989198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.323 [2024-11-07 09:51:20.989222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:53.323 [2024-11-07 09:51:20.989230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.634 ms 00:22:53.323 [2024-11-07 09:51:20.989236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:22:53.323 [2024-11-07 09:51:20.989262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.323 [2024-11-07 09:51:20.989269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:53.323 [2024-11-07 09:51:20.989276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:53.323 [2024-11-07 09:51:20.989282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.323 [2024-11-07 09:51:20.989296] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:53.323 [2024-11-07 09:51:20.989312] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:53.323 [2024-11-07 09:51:20.989340] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:53.323 [2024-11-07 09:51:20.989352] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:53.323 [2024-11-07 09:51:20.989433] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:53.323 [2024-11-07 09:51:20.989443] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:53.323 [2024-11-07 09:51:20.989451] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:53.323 [2024-11-07 09:51:20.989459] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:53.323 [2024-11-07 09:51:20.989468] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:53.323 [2024-11-07 09:51:20.989475] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:53.323 [2024-11-07 09:51:20.989480] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:53.323 [2024-11-07 09:51:20.989486] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:53.323 [2024-11-07 09:51:20.989492] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:53.323 [2024-11-07 09:51:20.989499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.323 [2024-11-07 09:51:20.989504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:53.323 [2024-11-07 09:51:20.989510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.204 ms 00:22:53.323 [2024-11-07 09:51:20.989516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.323 [2024-11-07 09:51:20.989581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.323 [2024-11-07 09:51:20.989589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:53.323 [2024-11-07 09:51:20.989595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:22:53.323 [2024-11-07 09:51:20.989601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.323 [2024-11-07 09:51:20.989688] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:53.323 [2024-11-07 09:51:20.989728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:53.323 [2024-11-07 09:51:20.989735] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:53.323 [2024-11-07 09:51:20.989741] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:53.323 [2024-11-07 09:51:20.989747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:53.323 [2024-11-07 09:51:20.989753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:53.323 [2024-11-07 09:51:20.989759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:53.323 [2024-11-07 09:51:20.989765] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:53.323 [2024-11-07 09:51:20.989770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:53.323 [2024-11-07 09:51:20.989775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:53.323 [2024-11-07 09:51:20.989781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:53.323 [2024-11-07 09:51:20.989790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:53.323 [2024-11-07 09:51:20.989795] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:53.323 [2024-11-07 09:51:20.989801] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:53.323 [2024-11-07 09:51:20.989807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:53.323 [2024-11-07 09:51:20.989812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:53.323 [2024-11-07 09:51:20.989817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:53.324 [2024-11-07 09:51:20.989822] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:53.324 [2024-11-07 09:51:20.989827] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:53.324 [2024-11-07 09:51:20.989833] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:53.324 [2024-11-07 09:51:20.989837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:53.324 [2024-11-07 09:51:20.989844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:53.324 [2024-11-07 09:51:20.989849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:53.324 [2024-11-07 09:51:20.989854] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:53.324 [2024-11-07 09:51:20.989859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:53.324 [2024-11-07 09:51:20.989864] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:53.324 [2024-11-07 09:51:20.989869] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:53.324 [2024-11-07 09:51:20.989874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:53.324 [2024-11-07 09:51:20.989879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:53.324 [2024-11-07 09:51:20.989884] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:53.324 [2024-11-07 09:51:20.989889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:53.324 [2024-11-07 09:51:20.989893] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:53.324 [2024-11-07 09:51:20.989898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:53.324 [2024-11-07 09:51:20.989903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:53.324 [2024-11-07 09:51:20.989908] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:53.324 
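
The sizes in this layout dump can be cross-checked two independent ways: the superblock metadata dump further down lists the same regions in 4 KiB FTL blocks (the 80.00 MiB l2p region appears there as type 0x2 with blk_sz:0x5000), and the 20971520 L2P entries at the 4-byte L2P address size reported earlier give the same 80 MiB on their own. A sketch of both conversions (4096-byte FTL block size assumed):

    # Region size from the superblock dump: 4 KiB blocks to MiB.
    blk_sz=0x5000                                              # l2p region, type 0x2
    echo "l2p region: $(( blk_sz * 4096 / 1024 / 1024 )) MiB"  # -> 80 MiB
    # Same number from the L2P table itself: entries x entry size.
    echo "l2p table:  $(( 20971520 * 4 / 1024 / 1024 )) MiB"   # -> 80 MiB
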
[2024-11-07 09:51:20.989913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:53.324 [2024-11-07 09:51:20.989918] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:53.324 [2024-11-07 09:51:20.989923] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:53.324 [2024-11-07 09:51:20.989928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:53.324 [2024-11-07 09:51:20.989934] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:53.324 [2024-11-07 09:51:20.989939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:53.324 [2024-11-07 09:51:20.989944] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:53.324 [2024-11-07 09:51:20.989950] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:53.324 [2024-11-07 09:51:20.989955] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:53.324 [2024-11-07 09:51:20.989961] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:53.324 [2024-11-07 09:51:20.989967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:53.324 [2024-11-07 09:51:20.989974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:53.324 [2024-11-07 09:51:20.989980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:53.324 [2024-11-07 09:51:20.989986] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:53.324 [2024-11-07 09:51:20.989991] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:53.324 [2024-11-07 09:51:20.989996] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:53.324 [2024-11-07 09:51:20.990001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:53.324 [2024-11-07 09:51:20.990006] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:53.324 [2024-11-07 09:51:20.990012] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:53.324 [2024-11-07 09:51:20.990020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:53.324 [2024-11-07 09:51:20.990027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:53.324 [2024-11-07 09:51:20.990032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:53.324 [2024-11-07 09:51:20.990038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:53.324 [2024-11-07 09:51:20.990043] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:53.324 [2024-11-07 09:51:20.990049] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:53.324 [2024-11-07 09:51:20.990054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:53.324 [2024-11-07 09:51:20.990060] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 
blk_sz:0x800 00:22:53.324 [2024-11-07 09:51:20.990065] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:53.324 [2024-11-07 09:51:20.990071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:53.324 [2024-11-07 09:51:20.990076] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:53.324 [2024-11-07 09:51:20.990082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:53.324 [2024-11-07 09:51:20.990087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:53.324 [2024-11-07 09:51:20.990092] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:53.324 [2024-11-07 09:51:20.990098] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:53.324 [2024-11-07 09:51:20.990103] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:53.324 [2024-11-07 09:51:20.990109] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:53.324 [2024-11-07 09:51:20.990115] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:53.324 [2024-11-07 09:51:20.990121] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:53.324 [2024-11-07 09:51:20.990126] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:53.324 [2024-11-07 09:51:20.990133] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:53.324 [2024-11-07 09:51:20.990138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.324 [2024-11-07 09:51:20.990144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:53.324 [2024-11-07 09:51:20.990150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.513 ms 00:22:53.324 [2024-11-07 09:51:20.990156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.581 [2024-11-07 09:51:21.011229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.582 [2024-11-07 09:51:21.011262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:53.582 [2024-11-07 09:51:21.011272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.041 ms 00:22:53.582 [2024-11-07 09:51:21.011285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.582 [2024-11-07 09:51:21.011355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.582 [2024-11-07 09:51:21.011364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:53.582 [2024-11-07 09:51:21.011371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:22:53.582 [2024-11-07 
09:51:21.011378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.582 [2024-11-07 09:51:21.048607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.582 [2024-11-07 09:51:21.048652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:53.582 [2024-11-07 09:51:21.048663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.177 ms 00:22:53.582 [2024-11-07 09:51:21.048672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.582 [2024-11-07 09:51:21.048720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.582 [2024-11-07 09:51:21.048728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:53.582 [2024-11-07 09:51:21.048735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:53.582 [2024-11-07 09:51:21.048741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.582 [2024-11-07 09:51:21.049075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.582 [2024-11-07 09:51:21.049094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:53.582 [2024-11-07 09:51:21.049103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:22:53.582 [2024-11-07 09:51:21.049109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.582 [2024-11-07 09:51:21.049218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.582 [2024-11-07 09:51:21.049232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:53.582 [2024-11-07 09:51:21.049239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:22:53.582 [2024-11-07 09:51:21.049245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.582 [2024-11-07 09:51:21.059771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.582 [2024-11-07 09:51:21.059796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:53.582 [2024-11-07 09:51:21.059805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.508 ms 00:22:53.582 [2024-11-07 09:51:21.059811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.582 [2024-11-07 09:51:21.069486] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:53.582 [2024-11-07 09:51:21.069516] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:53.582 [2024-11-07 09:51:21.069526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.582 [2024-11-07 09:51:21.069533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:53.582 [2024-11-07 09:51:21.069540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.642 ms 00:22:53.582 [2024-11-07 09:51:21.069546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.582 [2024-11-07 09:51:21.088699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.582 [2024-11-07 09:51:21.088740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:53.582 [2024-11-07 09:51:21.088758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.118 ms 00:22:53.582 [2024-11-07 09:51:21.088765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:22:53.582 [2024-11-07 09:51:21.098269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.582 [2024-11-07 09:51:21.098301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:53.582 [2024-11-07 09:51:21.098309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.454 ms 00:22:53.582 [2024-11-07 09:51:21.098315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.582 [2024-11-07 09:51:21.106984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.582 [2024-11-07 09:51:21.107011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:53.582 [2024-11-07 09:51:21.107019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.641 ms 00:22:53.582 [2024-11-07 09:51:21.107025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.582 [2024-11-07 09:51:21.107523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.582 [2024-11-07 09:51:21.107539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:53.582 [2024-11-07 09:51:21.107546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.440 ms 00:22:53.582 [2024-11-07 09:51:21.107552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.582 [2024-11-07 09:51:21.151637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.582 [2024-11-07 09:51:21.151677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:53.582 [2024-11-07 09:51:21.151689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.070 ms 00:22:53.582 [2024-11-07 09:51:21.151695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.582 [2024-11-07 09:51:21.160111] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:53.582 [2024-11-07 09:51:21.162416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.582 [2024-11-07 09:51:21.162442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:53.582 [2024-11-07 09:51:21.162452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.670 ms 00:22:53.582 [2024-11-07 09:51:21.162459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.582 [2024-11-07 09:51:21.162534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.582 [2024-11-07 09:51:21.162543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:53.582 [2024-11-07 09:51:21.162550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:53.582 [2024-11-07 09:51:21.162556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.582 [2024-11-07 09:51:21.162620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.582 [2024-11-07 09:51:21.162641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:53.582 [2024-11-07 09:51:21.162648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:22:53.582 [2024-11-07 09:51:21.162653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.582 [2024-11-07 09:51:21.162669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.582 [2024-11-07 09:51:21.162678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:53.582 
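
Each startup step above is traced with its own duration, and those per-step figures account for nearly all of the 'FTL startup' total reported just below: they sum to roughly 207.5 ms of the 211.037 ms, the remainder presumably being time spent between steps. A one-liner to re-add them from a saved slice of the startup sequence (ftl_startup.log is a hypothetical path):

    # Sum every per-step 'duration: X ms' trace line in the slice; the 'duration = ...' process total is not matched.
    grep -o 'duration: [0-9.]* ms' ftl_startup.log | awk '{ sum += $2 } END { printf "steps total: %.3f ms\n", sum }'
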
[2024-11-07 09:51:21.162685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:53.582 [2024-11-07 09:51:21.162691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.582 [2024-11-07 09:51:21.162716] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:53.582 [2024-11-07 09:51:21.162729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.582 [2024-11-07 09:51:21.162736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:53.582 [2024-11-07 09:51:21.162742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:53.582 [2024-11-07 09:51:21.162748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.582 [2024-11-07 09:51:21.180993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.582 [2024-11-07 09:51:21.181021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:53.582 [2024-11-07 09:51:21.181030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.226 ms 00:22:53.582 [2024-11-07 09:51:21.181037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.582 [2024-11-07 09:51:21.181098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.582 [2024-11-07 09:51:21.181106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:53.582 [2024-11-07 09:51:21.181112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:22:53.582 [2024-11-07 09:51:21.181118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.582 [2024-11-07 09:51:21.181952] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 211.037 ms, result 0 00:22:54.557  [2024-11-07T09:51:23.600Z] Copying: 47/1024 [MB] (47 MBps) [2024-11-07T09:51:24.534Z] Copying: 93/1024 [MB] (46 MBps) [2024-11-07T09:51:25.476Z] Copying: 139/1024 [MB] (45 MBps) [2024-11-07T09:51:26.416Z] Copying: 179/1024 [MB] (39 MBps) [2024-11-07T09:51:27.358Z] Copying: 206/1024 [MB] (27 MBps) [2024-11-07T09:51:28.301Z] Copying: 222/1024 [MB] (15 MBps) [2024-11-07T09:51:29.245Z] Copying: 239/1024 [MB] (17 MBps) [2024-11-07T09:51:30.200Z] Copying: 250/1024 [MB] (10 MBps) [2024-11-07T09:51:31.595Z] Copying: 270/1024 [MB] (20 MBps) [2024-11-07T09:51:32.540Z] Copying: 303/1024 [MB] (32 MBps) [2024-11-07T09:51:33.481Z] Copying: 320/1024 [MB] (17 MBps) [2024-11-07T09:51:34.423Z] Copying: 340/1024 [MB] (19 MBps) [2024-11-07T09:51:35.365Z] Copying: 368/1024 [MB] (27 MBps) [2024-11-07T09:51:36.326Z] Copying: 394/1024 [MB] (26 MBps) [2024-11-07T09:51:37.258Z] Copying: 419/1024 [MB] (25 MBps) [2024-11-07T09:51:38.198Z] Copying: 457/1024 [MB] (37 MBps) [2024-11-07T09:51:39.582Z] Copying: 480/1024 [MB] (23 MBps) [2024-11-07T09:51:40.529Z] Copying: 502/1024 [MB] (22 MBps) [2024-11-07T09:51:41.469Z] Copying: 529/1024 [MB] (26 MBps) [2024-11-07T09:51:42.413Z] Copying: 541/1024 [MB] (12 MBps) [2024-11-07T09:51:43.354Z] Copying: 553/1024 [MB] (11 MBps) [2024-11-07T09:51:44.298Z] Copying: 567/1024 [MB] (14 MBps) [2024-11-07T09:51:45.241Z] Copying: 585/1024 [MB] (18 MBps) [2024-11-07T09:51:46.628Z] Copying: 604/1024 [MB] (19 MBps) [2024-11-07T09:51:47.200Z] Copying: 625/1024 [MB] (20 MBps) [2024-11-07T09:51:48.620Z] Copying: 646/1024 [MB] (20 MBps) [2024-11-07T09:51:49.565Z] Copying: 662/1024 [MB] (16 MBps) [2024-11-07T09:51:50.507Z] Copying: 675/1024 
[MB] (12 MBps) [2024-11-07T09:51:51.449Z] Copying: 690/1024 [MB] (14 MBps) [2024-11-07T09:51:52.389Z] Copying: 702/1024 [MB] (12 MBps) [2024-11-07T09:51:53.332Z] Copying: 713/1024 [MB] (10 MBps) [2024-11-07T09:51:54.271Z] Copying: 723/1024 [MB] (10 MBps) [2024-11-07T09:51:55.208Z] Copying: 744/1024 [MB] (20 MBps) [2024-11-07T09:51:56.591Z] Copying: 778/1024 [MB] (34 MBps) [2024-11-07T09:51:57.533Z] Copying: 804/1024 [MB] (26 MBps) [2024-11-07T09:51:58.477Z] Copying: 822/1024 [MB] (18 MBps) [2024-11-07T09:51:59.430Z] Copying: 833/1024 [MB] (10 MBps) [2024-11-07T09:52:00.370Z] Copying: 844/1024 [MB] (11 MBps) [2024-11-07T09:52:01.312Z] Copying: 859/1024 [MB] (14 MBps) [2024-11-07T09:52:02.252Z] Copying: 874/1024 [MB] (14 MBps) [2024-11-07T09:52:03.667Z] Copying: 888/1024 [MB] (14 MBps) [2024-11-07T09:52:04.263Z] Copying: 917/1024 [MB] (28 MBps) [2024-11-07T09:52:05.204Z] Copying: 932/1024 [MB] (15 MBps) [2024-11-07T09:52:06.587Z] Copying: 947/1024 [MB] (14 MBps) [2024-11-07T09:52:07.534Z] Copying: 983/1024 [MB] (36 MBps) [2024-11-07T09:52:08.481Z] Copying: 1009/1024 [MB] (25 MBps) [2024-11-07T09:52:09.483Z] Copying: 1043952/1048576 [kB] (10096 kBps) [2024-11-07T09:52:09.483Z] Copying: 1048296/1048576 [kB] (4344 kBps) [2024-11-07T09:52:09.483Z] Copying: 1024/1024 [MB] (average 21 MBps)[2024-11-07 09:52:09.474164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.812 [2024-11-07 09:52:09.474248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:41.812 [2024-11-07 09:52:09.474267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:41.812 [2024-11-07 09:52:09.474278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.812 [2024-11-07 09:52:09.474302] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:41.812 [2024-11-07 09:52:09.477422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.812 [2024-11-07 09:52:09.477470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:41.812 [2024-11-07 09:52:09.477482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.102 ms 00:23:41.812 [2024-11-07 09:52:09.477491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.075 [2024-11-07 09:52:09.490755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.075 [2024-11-07 09:52:09.490806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:42.075 [2024-11-07 09:52:09.490819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.599 ms 00:23:42.075 [2024-11-07 09:52:09.490828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.075 [2024-11-07 09:52:09.513554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.075 [2024-11-07 09:52:09.513603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:42.075 [2024-11-07 09:52:09.513615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.707 ms 00:23:42.075 [2024-11-07 09:52:09.513624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.075 [2024-11-07 09:52:09.519782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.075 [2024-11-07 09:52:09.519830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:42.075 [2024-11-07 09:52:09.519842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 6.113 ms 00:23:42.075 [2024-11-07 09:52:09.519850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.075 [2024-11-07 09:52:09.546494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.075 [2024-11-07 09:52:09.546559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:42.075 [2024-11-07 09:52:09.546573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.598 ms 00:23:42.075 [2024-11-07 09:52:09.546581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.075 [2024-11-07 09:52:09.562813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.075 [2024-11-07 09:52:09.562862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:42.075 [2024-11-07 09:52:09.562877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.184 ms 00:23:42.075 [2024-11-07 09:52:09.562886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.075 [2024-11-07 09:52:09.730388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.075 [2024-11-07 09:52:09.730502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:42.075 [2024-11-07 09:52:09.730521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 167.444 ms 00:23:42.075 [2024-11-07 09:52:09.730543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.337 [2024-11-07 09:52:09.760295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.337 [2024-11-07 09:52:09.760363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:42.337 [2024-11-07 09:52:09.760377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.732 ms 00:23:42.337 [2024-11-07 09:52:09.760386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.337 [2024-11-07 09:52:09.787344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.337 [2024-11-07 09:52:09.787404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:42.337 [2024-11-07 09:52:09.787417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.904 ms 00:23:42.337 [2024-11-07 09:52:09.787426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.337 [2024-11-07 09:52:09.814594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.337 [2024-11-07 09:52:09.814666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:42.337 [2024-11-07 09:52:09.814679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.116 ms 00:23:42.337 [2024-11-07 09:52:09.814688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.337 [2024-11-07 09:52:09.840323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.337 [2024-11-07 09:52:09.840375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:42.337 [2024-11-07 09:52:09.840390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.541 ms 00:23:42.337 [2024-11-07 09:52:09.840398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.337 [2024-11-07 09:52:09.840444] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:42.337 [2024-11-07 09:52:09.840462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 98304 / 261120 wr_cnt: 
1 state: open 00:23:42.337 [2024-11-07 09:52:09.840474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free [bands 3 through 100 repeat identically: 0 / 261120 wr_cnt: 0 state: free] 00:23:42.338 [2024-11-07
09:52:09.841323] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:42.338 [2024-11-07 09:52:09.841332] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4d0892d7-c7ff-4fc8-aad2-a34eb0dcb004 00:23:42.338 [2024-11-07 09:52:09.841342] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 98304 00:23:42.338 [2024-11-07 09:52:09.841356] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 99264 00:23:42.338 [2024-11-07 09:52:09.841384] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 98304 00:23:42.338 [2024-11-07 09:52:09.841394] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0098 00:23:42.338 [2024-11-07 09:52:09.841402] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:42.338 [2024-11-07 09:52:09.841410] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:42.338 [2024-11-07 09:52:09.841418] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:42.338 [2024-11-07 09:52:09.841426] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:42.338 [2024-11-07 09:52:09.841435] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:42.338 [2024-11-07 09:52:09.841443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.338 [2024-11-07 09:52:09.841453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:42.338 [2024-11-07 09:52:09.841461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.000 ms 00:23:42.338 [2024-11-07 09:52:09.841470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.338 [2024-11-07 09:52:09.855157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.338 [2024-11-07 09:52:09.855206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:42.338 [2024-11-07 09:52:09.855219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.665 ms 00:23:42.338 [2024-11-07 09:52:09.855228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.338 [2024-11-07 09:52:09.855667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.338 [2024-11-07 09:52:09.855728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:42.338 [2024-11-07 09:52:09.855739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.399 ms 00:23:42.338 [2024-11-07 09:52:09.855747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.338 [2024-11-07 09:52:09.892595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:42.338 [2024-11-07 09:52:09.892670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:42.338 [2024-11-07 09:52:09.892682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:42.338 [2024-11-07 09:52:09.892699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.338 [2024-11-07 09:52:09.892775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:42.338 [2024-11-07 09:52:09.892785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:42.338 [2024-11-07 09:52:09.892794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:42.338 [2024-11-07 09:52:09.892802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.338 [2024-11-07 09:52:09.892907] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:42.338 [2024-11-07 09:52:09.892919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:42.338 [2024-11-07 09:52:09.892928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:42.338 [2024-11-07 09:52:09.892936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.338 [2024-11-07 09:52:09.892953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:42.338 [2024-11-07 09:52:09.892962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:42.338 [2024-11-07 09:52:09.892971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:42.338 [2024-11-07 09:52:09.892979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.338 [2024-11-07 09:52:09.979183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:42.338 [2024-11-07 09:52:09.979255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:42.338 [2024-11-07 09:52:09.979269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:42.338 [2024-11-07 09:52:09.979278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.599 [2024-11-07 09:52:10.050325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:42.599 [2024-11-07 09:52:10.050401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:42.599 [2024-11-07 09:52:10.050414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:42.599 [2024-11-07 09:52:10.050424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.599 [2024-11-07 09:52:10.050504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:42.599 [2024-11-07 09:52:10.050515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:42.599 [2024-11-07 09:52:10.050524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:42.599 [2024-11-07 09:52:10.050532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.599 [2024-11-07 09:52:10.050594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:42.599 [2024-11-07 09:52:10.050605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:42.599 [2024-11-07 09:52:10.050614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:42.599 [2024-11-07 09:52:10.050624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.599 [2024-11-07 09:52:10.050744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:42.599 [2024-11-07 09:52:10.050759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:42.599 [2024-11-07 09:52:10.050768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:42.599 [2024-11-07 09:52:10.050777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.599 [2024-11-07 09:52:10.050809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:42.599 [2024-11-07 09:52:10.050820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:42.599 [2024-11-07 09:52:10.050828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:42.599 [2024-11-07 09:52:10.050836] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:23:42.599 [2024-11-07 09:52:10.050877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:42.599 [2024-11-07 09:52:10.050890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:42.599 [2024-11-07 09:52:10.050898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:42.599 [2024-11-07 09:52:10.050906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.599 [2024-11-07 09:52:10.050952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:42.599 [2024-11-07 09:52:10.050963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:42.599 [2024-11-07 09:52:10.050972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:42.599 [2024-11-07 09:52:10.050980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.599 [2024-11-07 09:52:10.051114] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 576.914 ms, result 0 00:23:43.985 00:23:43.985 00:23:43.985 09:52:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:23:45.905 09:52:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:46.168 [2024-11-07 09:52:13.601146] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:23:46.168 [2024-11-07 09:52:13.601300] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78154 ] 00:23:46.168 [2024-11-07 09:52:13.766413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:46.430 [2024-11-07 09:52:13.893918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:46.692 [2024-11-07 09:52:14.186438] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:46.692 [2024-11-07 09:52:14.186524] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:46.692 [2024-11-07 09:52:14.349136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.692 [2024-11-07 09:52:14.349206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:46.692 [2024-11-07 09:52:14.349227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:46.692 [2024-11-07 09:52:14.349236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.692 [2024-11-07 09:52:14.349291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.692 [2024-11-07 09:52:14.349302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:46.692 [2024-11-07 09:52:14.349314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:23:46.692 [2024-11-07 09:52:14.349322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.692 [2024-11-07 09:52:14.349341] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:46.692 [2024-11-07 09:52:14.350061] mngt/ftl_mngt_bdev.c: 
236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:46.692 [2024-11-07 09:52:14.350090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.692 [2024-11-07 09:52:14.350098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:46.692 [2024-11-07 09:52:14.350108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.753 ms 00:23:46.692 [2024-11-07 09:52:14.350116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.692 [2024-11-07 09:52:14.351847] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:46.955 [2024-11-07 09:52:14.365967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.955 [2024-11-07 09:52:14.366018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:46.955 [2024-11-07 09:52:14.366033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.122 ms 00:23:46.955 [2024-11-07 09:52:14.366042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.955 [2024-11-07 09:52:14.366119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.955 [2024-11-07 09:52:14.366129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:46.955 [2024-11-07 09:52:14.366137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:23:46.955 [2024-11-07 09:52:14.366146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.955 [2024-11-07 09:52:14.374137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.955 [2024-11-07 09:52:14.374180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:46.955 [2024-11-07 09:52:14.374192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.915 ms 00:23:46.955 [2024-11-07 09:52:14.374202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.955 [2024-11-07 09:52:14.374286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.955 [2024-11-07 09:52:14.374295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:46.955 [2024-11-07 09:52:14.374304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:23:46.955 [2024-11-07 09:52:14.374312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.955 [2024-11-07 09:52:14.374356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.955 [2024-11-07 09:52:14.374373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:46.955 [2024-11-07 09:52:14.374382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:46.955 [2024-11-07 09:52:14.374390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.955 [2024-11-07 09:52:14.374414] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:46.955 [2024-11-07 09:52:14.378504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.955 [2024-11-07 09:52:14.378543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:46.955 [2024-11-07 09:52:14.378554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.094 ms 00:23:46.955 [2024-11-07 09:52:14.378565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.955 [2024-11-07 
09:52:14.378610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.955 [2024-11-07 09:52:14.378619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:46.955 [2024-11-07 09:52:14.378641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:23:46.955 [2024-11-07 09:52:14.378649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.955 [2024-11-07 09:52:14.378700] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:46.955 [2024-11-07 09:52:14.378722] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:46.955 [2024-11-07 09:52:14.378773] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:46.955 [2024-11-07 09:52:14.378795] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:46.955 [2024-11-07 09:52:14.378902] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:46.955 [2024-11-07 09:52:14.378922] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:46.956 [2024-11-07 09:52:14.378933] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:46.956 [2024-11-07 09:52:14.378945] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:46.956 [2024-11-07 09:52:14.378954] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:46.956 [2024-11-07 09:52:14.378963] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:46.956 [2024-11-07 09:52:14.378971] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:46.956 [2024-11-07 09:52:14.378980] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:46.956 [2024-11-07 09:52:14.378987] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:46.956 [2024-11-07 09:52:14.378999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.956 [2024-11-07 09:52:14.379007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:46.956 [2024-11-07 09:52:14.379015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:23:46.956 [2024-11-07 09:52:14.379022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.956 [2024-11-07 09:52:14.379109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.956 [2024-11-07 09:52:14.379145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:46.956 [2024-11-07 09:52:14.379154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:23:46.956 [2024-11-07 09:52:14.379161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.956 [2024-11-07 09:52:14.379268] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:46.956 [2024-11-07 09:52:14.379297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:46.956 [2024-11-07 09:52:14.379307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:46.956 [2024-11-07 09:52:14.379315] ftl_layout.c: 133:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:23:46.956 [2024-11-07 09:52:14.379323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:46.956 [2024-11-07 09:52:14.379330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:46.956 [2024-11-07 09:52:14.379338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:46.956 [2024-11-07 09:52:14.379345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:46.956 [2024-11-07 09:52:14.379351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:46.956 [2024-11-07 09:52:14.379358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:46.956 [2024-11-07 09:52:14.379365] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:46.956 [2024-11-07 09:52:14.379373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:46.956 [2024-11-07 09:52:14.379381] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:46.956 [2024-11-07 09:52:14.379389] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:46.956 [2024-11-07 09:52:14.379396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:46.956 [2024-11-07 09:52:14.379409] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:46.956 [2024-11-07 09:52:14.379416] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:46.956 [2024-11-07 09:52:14.379423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:46.956 [2024-11-07 09:52:14.379429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:46.956 [2024-11-07 09:52:14.379435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:46.956 [2024-11-07 09:52:14.379442] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:46.956 [2024-11-07 09:52:14.379448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:46.956 [2024-11-07 09:52:14.379456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:46.956 [2024-11-07 09:52:14.379462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:46.956 [2024-11-07 09:52:14.379468] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:46.956 [2024-11-07 09:52:14.379474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:46.956 [2024-11-07 09:52:14.379481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:46.956 [2024-11-07 09:52:14.379488] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:46.956 [2024-11-07 09:52:14.379495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:46.956 [2024-11-07 09:52:14.379502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:46.956 [2024-11-07 09:52:14.379509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:46.956 [2024-11-07 09:52:14.379516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:46.956 [2024-11-07 09:52:14.379523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:46.956 [2024-11-07 09:52:14.379530] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:46.956 [2024-11-07 09:52:14.379537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:46.956 [2024-11-07 09:52:14.379544] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:46.956 [2024-11-07 09:52:14.379551] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:46.956 [2024-11-07 09:52:14.379558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:46.956 [2024-11-07 09:52:14.379566] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:46.956 [2024-11-07 09:52:14.379572] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:46.956 [2024-11-07 09:52:14.379579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:46.956 [2024-11-07 09:52:14.379586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:46.956 [2024-11-07 09:52:14.379592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:46.956 [2024-11-07 09:52:14.379598] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:46.956 [2024-11-07 09:52:14.379609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:46.956 [2024-11-07 09:52:14.379617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:46.956 [2024-11-07 09:52:14.379624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:46.956 [2024-11-07 09:52:14.379656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:46.956 [2024-11-07 09:52:14.379664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:46.956 [2024-11-07 09:52:14.379671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:46.956 [2024-11-07 09:52:14.379679] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:46.956 [2024-11-07 09:52:14.379686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:46.956 [2024-11-07 09:52:14.379694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:46.956 [2024-11-07 09:52:14.379703] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:46.956 [2024-11-07 09:52:14.379713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:46.956 [2024-11-07 09:52:14.379722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:46.956 [2024-11-07 09:52:14.379730] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:46.956 [2024-11-07 09:52:14.379737] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:46.956 [2024-11-07 09:52:14.379744] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:46.956 [2024-11-07 09:52:14.379751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:46.956 [2024-11-07 09:52:14.379759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:46.956 [2024-11-07 09:52:14.379766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:46.956 [2024-11-07 
09:52:14.379773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:46.956 [2024-11-07 09:52:14.379781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:46.956 [2024-11-07 09:52:14.379790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:46.956 [2024-11-07 09:52:14.379798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:46.957 [2024-11-07 09:52:14.379805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:46.957 [2024-11-07 09:52:14.379813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:46.957 [2024-11-07 09:52:14.379821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:46.957 [2024-11-07 09:52:14.379829] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:46.957 [2024-11-07 09:52:14.379840] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:46.957 [2024-11-07 09:52:14.379849] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:46.957 [2024-11-07 09:52:14.379857] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:46.957 [2024-11-07 09:52:14.379865] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:46.957 [2024-11-07 09:52:14.379873] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:46.957 [2024-11-07 09:52:14.379881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.957 [2024-11-07 09:52:14.379890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:46.957 [2024-11-07 09:52:14.379898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.683 ms 00:23:46.957 [2024-11-07 09:52:14.379906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.957 [2024-11-07 09:52:14.411777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.957 [2024-11-07 09:52:14.411822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:46.957 [2024-11-07 09:52:14.411835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.823 ms 00:23:46.957 [2024-11-07 09:52:14.411845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.957 [2024-11-07 09:52:14.411936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.957 [2024-11-07 09:52:14.411946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:46.957 [2024-11-07 09:52:14.411956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:23:46.957 [2024-11-07 09:52:14.411965] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.957 [2024-11-07 09:52:14.468695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.957 [2024-11-07 09:52:14.468751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:46.957 [2024-11-07 09:52:14.468765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.668 ms 00:23:46.957 [2024-11-07 09:52:14.468774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.957 [2024-11-07 09:52:14.468826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.957 [2024-11-07 09:52:14.468837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:46.957 [2024-11-07 09:52:14.468846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:46.957 [2024-11-07 09:52:14.468858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.957 [2024-11-07 09:52:14.469467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.957 [2024-11-07 09:52:14.469505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:46.957 [2024-11-07 09:52:14.469518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.526 ms 00:23:46.957 [2024-11-07 09:52:14.469527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.957 [2024-11-07 09:52:14.469705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.957 [2024-11-07 09:52:14.469729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:46.957 [2024-11-07 09:52:14.469738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:23:46.957 [2024-11-07 09:52:14.469750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.957 [2024-11-07 09:52:14.485412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.957 [2024-11-07 09:52:14.485458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:46.957 [2024-11-07 09:52:14.485472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.640 ms 00:23:46.957 [2024-11-07 09:52:14.485481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.957 [2024-11-07 09:52:14.499579] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:23:46.957 [2024-11-07 09:52:14.499643] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:46.957 [2024-11-07 09:52:14.499658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.957 [2024-11-07 09:52:14.499667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:46.957 [2024-11-07 09:52:14.499677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.071 ms 00:23:46.957 [2024-11-07 09:52:14.499684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.957 [2024-11-07 09:52:14.525574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.957 [2024-11-07 09:52:14.525639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:46.957 [2024-11-07 09:52:14.525652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.742 ms 00:23:46.957 [2024-11-07 09:52:14.525662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.957 [2024-11-07 
09:52:14.538504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.957 [2024-11-07 09:52:14.538574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:46.957 [2024-11-07 09:52:14.538586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.789 ms 00:23:46.957 [2024-11-07 09:52:14.538594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.957 [2024-11-07 09:52:14.550980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.957 [2024-11-07 09:52:14.551027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:46.957 [2024-11-07 09:52:14.551040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.330 ms 00:23:46.957 [2024-11-07 09:52:14.551047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.957 [2024-11-07 09:52:14.551753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.957 [2024-11-07 09:52:14.551782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:46.957 [2024-11-07 09:52:14.551795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.597 ms 00:23:46.957 [2024-11-07 09:52:14.551808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.957 [2024-11-07 09:52:14.616089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.957 [2024-11-07 09:52:14.616161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:46.957 [2024-11-07 09:52:14.616185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.259 ms 00:23:46.957 [2024-11-07 09:52:14.616194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.217 [2024-11-07 09:52:14.627834] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:47.217 [2024-11-07 09:52:14.631460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.217 [2024-11-07 09:52:14.631505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:47.217 [2024-11-07 09:52:14.631518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.204 ms 00:23:47.217 [2024-11-07 09:52:14.631528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.217 [2024-11-07 09:52:14.631652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.217 [2024-11-07 09:52:14.631665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:47.217 [2024-11-07 09:52:14.631675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:23:47.217 [2024-11-07 09:52:14.631687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.217 [2024-11-07 09:52:14.633401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.217 [2024-11-07 09:52:14.633446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:47.218 [2024-11-07 09:52:14.633457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.672 ms 00:23:47.218 [2024-11-07 09:52:14.633467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.218 [2024-11-07 09:52:14.633503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.218 [2024-11-07 09:52:14.633514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:47.218 [2024-11-07 09:52:14.633524] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:47.218 [2024-11-07 09:52:14.633532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.218 [2024-11-07 09:52:14.633573] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:47.218 [2024-11-07 09:52:14.633587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.218 [2024-11-07 09:52:14.633597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:47.218 [2024-11-07 09:52:14.633606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:23:47.218 [2024-11-07 09:52:14.633615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.218 [2024-11-07 09:52:14.659514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.218 [2024-11-07 09:52:14.659564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:47.218 [2024-11-07 09:52:14.659578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.864 ms 00:23:47.218 [2024-11-07 09:52:14.659592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.218 [2024-11-07 09:52:14.659693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.218 [2024-11-07 09:52:14.659704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:47.218 [2024-11-07 09:52:14.659714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:23:47.218 [2024-11-07 09:52:14.659723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.218 [2024-11-07 09:52:14.660961] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 311.310 ms, result 0 00:23:48.601  [2024-11-07T09:52:17.262Z] Copying: 988/1048576 [kB] (988 kBps) [2024-11-07T09:52:18.203Z] Copying: 3924/1048576 [kB] (2936 kBps) [2024-11-07T09:52:19.144Z] Copying: 19/1024 [MB] (16 MBps) [2024-11-07T09:52:20.093Z] Copying: 39/1024 [MB] (19 MBps) [2024-11-07T09:52:21.032Z] Copying: 60/1024 [MB] (21 MBps) [2024-11-07T09:52:21.974Z] Copying: 79/1024 [MB] (19 MBps) [2024-11-07T09:52:22.913Z] Copying: 102/1024 [MB] (22 MBps) [2024-11-07T09:52:23.854Z] Copying: 131/1024 [MB] (29 MBps) [2024-11-07T09:52:25.242Z] Copying: 150/1024 [MB] (18 MBps) [2024-11-07T09:52:26.179Z] Copying: 168/1024 [MB] (18 MBps) [2024-11-07T09:52:27.117Z] Copying: 195/1024 [MB] (26 MBps) [2024-11-07T09:52:28.059Z] Copying: 235/1024 [MB] (39 MBps) [2024-11-07T09:52:29.002Z] Copying: 257/1024 [MB] (22 MBps) [2024-11-07T09:52:29.935Z] Copying: 281/1024 [MB] (23 MBps) [2024-11-07T09:52:30.867Z] Copying: 325/1024 [MB] (44 MBps) [2024-11-07T09:52:32.282Z] Copying: 375/1024 [MB] (49 MBps) [2024-11-07T09:52:32.848Z] Copying: 426/1024 [MB] (50 MBps) [2024-11-07T09:52:34.220Z] Copying: 475/1024 [MB] (49 MBps) [2024-11-07T09:52:35.154Z] Copying: 524/1024 [MB] (49 MBps) [2024-11-07T09:52:36.086Z] Copying: 575/1024 [MB] (51 MBps) [2024-11-07T09:52:37.018Z] Copying: 626/1024 [MB] (50 MBps) [2024-11-07T09:52:37.958Z] Copying: 676/1024 [MB] (49 MBps) [2024-11-07T09:52:38.893Z] Copying: 726/1024 [MB] (50 MBps) [2024-11-07T09:52:40.265Z] Copying: 779/1024 [MB] (53 MBps) [2024-11-07T09:52:41.199Z] Copying: 837/1024 [MB] (57 MBps) [2024-11-07T09:52:42.134Z] Copying: 891/1024 [MB] (54 MBps) [2024-11-07T09:52:43.068Z] Copying: 945/1024 [MB] (53 MBps) [2024-11-07T09:52:43.326Z] Copying: 999/1024 [MB] (53 MBps) 
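The Copying entries above are spdk_dd's periodic progress trace for the 1024 MiB read-back from ftl0 into the test file; the run closes with the 1024/1024 summary entry just below, which reports the overall average (35 MBps). A minimal sketch of recovering a comparable figure from the trace itself, using two progress lines taken verbatim from this log (the `console` variable is illustrative, not part of the test):

    import re
    from datetime import datetime

    console = """
    [2024-11-07T09:52:19.144Z] Copying: 19/1024 [MB] (16 MBps)
    [2024-11-07T09:52:43.893Z] Copying: 1024/1024 [MB] (average 35 MBps)
    """

    # Each progress entry carries an ISO timestamp and a running MiB count.
    entries = re.findall(r"\[([0-9T:.Z-]+)\] Copying: (\d+)/1024 \[MB\]", console)
    t0 = datetime.fromisoformat(entries[0][0].replace("Z", "+00:00"))
    t1 = datetime.fromisoformat(entries[-1][0].replace("Z", "+00:00"))
    mib = int(entries[-1][1]) - int(entries[0][1])
    # Rough estimate over the sampled window only (~41 MiB/s here); the
    # tool's own "average 35 MBps" is measured from the true start of the
    # transfer, which includes the slow initial kB-scale progress above.
    print(f"~{mib / (t1 - t0).total_seconds():.0f} MiB/s")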
[2024-11-07T09:52:43.893Z] Copying: 1024/1024 [MB] (average 35 MBps)[2024-11-07 09:52:43.685455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.222 [2024-11-07 09:52:43.685541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:16.222 [2024-11-07 09:52:43.685581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:16.222 [2024-11-07 09:52:43.685598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.222 [2024-11-07 09:52:43.685649] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:16.222 [2024-11-07 09:52:43.690116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.222 [2024-11-07 09:52:43.690161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:16.222 [2024-11-07 09:52:43.690178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.440 ms 00:24:16.222 [2024-11-07 09:52:43.690193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.222 [2024-11-07 09:52:43.690570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.222 [2024-11-07 09:52:43.690602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:16.222 [2024-11-07 09:52:43.690641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.345 ms 00:24:16.222 [2024-11-07 09:52:43.690656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.222 [2024-11-07 09:52:43.701724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.222 [2024-11-07 09:52:43.701758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:16.222 [2024-11-07 09:52:43.701769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.044 ms 00:24:16.222 [2024-11-07 09:52:43.701776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.222 [2024-11-07 09:52:43.707904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.222 [2024-11-07 09:52:43.707931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:16.222 [2024-11-07 09:52:43.707940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.103 ms 00:24:16.222 [2024-11-07 09:52:43.707952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.222 [2024-11-07 09:52:43.730401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.222 [2024-11-07 09:52:43.730433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:16.222 [2024-11-07 09:52:43.730443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.406 ms 00:24:16.222 [2024-11-07 09:52:43.730451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.222 [2024-11-07 09:52:43.744504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.222 [2024-11-07 09:52:43.744535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:16.222 [2024-11-07 09:52:43.744546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.023 ms 00:24:16.222 [2024-11-07 09:52:43.744553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.222 [2024-11-07 09:52:43.745848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.222 [2024-11-07 09:52:43.745877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Persist P2L metadata 00:24:16.222 [2024-11-07 09:52:43.745886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.263 ms 00:24:16.222 [2024-11-07 09:52:43.745893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.222 [2024-11-07 09:52:43.768471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.222 [2024-11-07 09:52:43.768499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:16.222 [2024-11-07 09:52:43.768509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.559 ms 00:24:16.222 [2024-11-07 09:52:43.768516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.222 [2024-11-07 09:52:43.790506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.223 [2024-11-07 09:52:43.790536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:16.223 [2024-11-07 09:52:43.790553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.961 ms 00:24:16.223 [2024-11-07 09:52:43.790560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.223 [2024-11-07 09:52:43.812752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.223 [2024-11-07 09:52:43.812783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:16.223 [2024-11-07 09:52:43.812792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.163 ms 00:24:16.223 [2024-11-07 09:52:43.812799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.223 [2024-11-07 09:52:43.834853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.223 [2024-11-07 09:52:43.834880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:16.223 [2024-11-07 09:52:43.834890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.003 ms 00:24:16.223 [2024-11-07 09:52:43.834897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.223 [2024-11-07 09:52:43.834924] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:16.223 [2024-11-07 09:52:43.834938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:24:16.223 [2024-11-07 09:52:43.834948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:24:16.223 [2024-11-07 09:52:43.834956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.834964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.834971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.834979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.834987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.834994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835008] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 
09:52:43.835191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 
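This second Bands validity dump reflects the post-restore write-back: Band 1 is closed (261120 / 261120), Band 2 is open, and the remaining bands are free. The dump_stats block that follows it reports total writes, user writes, and WAF; the logged figures confirm that WAF (write amplification factor) here is simply total media writes divided by user writes. A quick check, using only numbers taken from this log:

    # WAF = total writes / user writes, per the two dump_stats blocks in this log
    for total_writes, user_writes in ((99264, 98304), (166336, 164352)):
        print(f"{total_writes} / {user_writes} = {total_writes / user_writes:.4f}")
    # prints 1.0098 and 1.0121, matching the "WAF:" lines of the first
    # (pre-restore, 09:52:09) and second (post-copy, 09:52:43) dumps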
00:24:16.223 [2024-11-07 09:52:43.835385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:16.223 [2024-11-07 09:52:43.835538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:16.224 [2024-11-07 09:52:43.835545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:16.224 [2024-11-07 09:52:43.835553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:16.224 [2024-11-07 09:52:43.835560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:16.224 [2024-11-07 09:52:43.835567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 
wr_cnt: 0 state: free 00:24:16.224 [2024-11-07 09:52:43.835574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:16.224 [2024-11-07 09:52:43.835581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:16.224 [2024-11-07 09:52:43.835588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:16.224 [2024-11-07 09:52:43.835595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:16.224 [2024-11-07 09:52:43.835602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:16.224 [2024-11-07 09:52:43.835609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:16.224 [2024-11-07 09:52:43.835617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:16.224 [2024-11-07 09:52:43.835625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:16.224 [2024-11-07 09:52:43.835641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:16.224 [2024-11-07 09:52:43.835650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:16.224 [2024-11-07 09:52:43.835657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:16.224 [2024-11-07 09:52:43.835664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:16.224 [2024-11-07 09:52:43.835671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:16.224 [2024-11-07 09:52:43.835679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:16.224 [2024-11-07 09:52:43.835686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:16.224 [2024-11-07 09:52:43.835694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:16.224 [2024-11-07 09:52:43.835708] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:16.224 [2024-11-07 09:52:43.835716] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4d0892d7-c7ff-4fc8-aad2-a34eb0dcb004 00:24:16.224 [2024-11-07 09:52:43.835724] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:24:16.224 [2024-11-07 09:52:43.835731] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 166336 00:24:16.224 [2024-11-07 09:52:43.835737] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 164352 00:24:16.224 [2024-11-07 09:52:43.835749] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0121 00:24:16.224 [2024-11-07 09:52:43.835755] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:16.224 [2024-11-07 09:52:43.835763] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:16.224 [2024-11-07 09:52:43.835770] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:16.224 [2024-11-07 09:52:43.835781] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:16.224 [2024-11-07 09:52:43.835787] ftl_debug.c: 
220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:16.224 [2024-11-07 09:52:43.835794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.224 [2024-11-07 09:52:43.835804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:16.224 [2024-11-07 09:52:43.835812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.870 ms 00:24:16.224 [2024-11-07 09:52:43.835819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.224 [2024-11-07 09:52:43.847939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.224 [2024-11-07 09:52:43.847969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:16.224 [2024-11-07 09:52:43.847979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.105 ms 00:24:16.224 [2024-11-07 09:52:43.847987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.224 [2024-11-07 09:52:43.848306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.224 [2024-11-07 09:52:43.848314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:16.224 [2024-11-07 09:52:43.848322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:24:16.224 [2024-11-07 09:52:43.848328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.224 [2024-11-07 09:52:43.880397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:16.224 [2024-11-07 09:52:43.880426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:16.224 [2024-11-07 09:52:43.880436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:16.224 [2024-11-07 09:52:43.880443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.224 [2024-11-07 09:52:43.880489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:16.224 [2024-11-07 09:52:43.880496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:16.224 [2024-11-07 09:52:43.880504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:16.224 [2024-11-07 09:52:43.880512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.224 [2024-11-07 09:52:43.880565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:16.224 [2024-11-07 09:52:43.880580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:16.224 [2024-11-07 09:52:43.880587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:16.224 [2024-11-07 09:52:43.880594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.224 [2024-11-07 09:52:43.880608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:16.224 [2024-11-07 09:52:43.880615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:16.224 [2024-11-07 09:52:43.880623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:16.224 [2024-11-07 09:52:43.880641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.483 [2024-11-07 09:52:43.956791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:16.483 [2024-11-07 09:52:43.956835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:16.483 [2024-11-07 09:52:43.956846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:24:16.483 [2024-11-07 09:52:43.956853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.483 [2024-11-07 09:52:44.019105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:16.483 [2024-11-07 09:52:44.019146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:16.483 [2024-11-07 09:52:44.019157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:16.483 [2024-11-07 09:52:44.019164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.483 [2024-11-07 09:52:44.019228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:16.483 [2024-11-07 09:52:44.019237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:16.483 [2024-11-07 09:52:44.019247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:16.483 [2024-11-07 09:52:44.019254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.483 [2024-11-07 09:52:44.019295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:16.483 [2024-11-07 09:52:44.019304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:16.483 [2024-11-07 09:52:44.019311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:16.483 [2024-11-07 09:52:44.019318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.483 [2024-11-07 09:52:44.019400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:16.483 [2024-11-07 09:52:44.019409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:16.483 [2024-11-07 09:52:44.019417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:16.483 [2024-11-07 09:52:44.019426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.483 [2024-11-07 09:52:44.019453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:16.483 [2024-11-07 09:52:44.019461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:16.483 [2024-11-07 09:52:44.019469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:16.483 [2024-11-07 09:52:44.019476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.483 [2024-11-07 09:52:44.019508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:16.483 [2024-11-07 09:52:44.019516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:16.483 [2024-11-07 09:52:44.019523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:16.483 [2024-11-07 09:52:44.019532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.483 [2024-11-07 09:52:44.019569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:16.483 [2024-11-07 09:52:44.019579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:16.483 [2024-11-07 09:52:44.019586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:16.483 [2024-11-07 09:52:44.019593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.483 [2024-11-07 09:52:44.019716] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 334.254 ms, result 0 00:24:17.054 00:24:17.054 00:24:17.054 09:52:44 ftl.ftl_dirty_shutdown -- 
ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:19.588 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:24:19.588 09:52:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:19.588 [2024-11-07 09:52:46.755051] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:24:19.588 [2024-11-07 09:52:46.755142] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78501 ] 00:24:19.588 [2024-11-07 09:52:46.909236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:19.588 [2024-11-07 09:52:47.003978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:19.588 [2024-11-07 09:52:47.252417] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:19.588 [2024-11-07 09:52:47.252478] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:19.848 [2024-11-07 09:52:47.405762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.848 [2024-11-07 09:52:47.405906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:19.848 [2024-11-07 09:52:47.405931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:19.848 [2024-11-07 09:52:47.405940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.848 [2024-11-07 09:52:47.405987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.848 [2024-11-07 09:52:47.405997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:19.848 [2024-11-07 09:52:47.406007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:24:19.848 [2024-11-07 09:52:47.406014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.848 [2024-11-07 09:52:47.406033] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:19.848 [2024-11-07 09:52:47.406691] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:19.848 [2024-11-07 09:52:47.406706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.848 [2024-11-07 09:52:47.406713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:19.848 [2024-11-07 09:52:47.406721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.678 ms 00:24:19.848 [2024-11-07 09:52:47.406728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.848 [2024-11-07 09:52:47.407741] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:19.848 [2024-11-07 09:52:47.419687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.848 [2024-11-07 09:52:47.419718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:19.848 [2024-11-07 09:52:47.419730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.947 ms 00:24:19.848 [2024-11-07 09:52:47.419738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
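The two dirty_shutdown.sh steps above (@94 and @95) are the integrity check at the heart of this test: testfile is verified against a checksum recorded earlier in the run, before the unclean shutdown, and spdk_dd then reads the second half of the data back out of the restarted FTL bdev. A condensed restatement of those two commands, copied from the log (the matching checksum comparison for testfile2 presumably follows later in the run):

    # Verify the checksum recorded before the dirty shutdown
    # (reports "testfile: OK" above).
    md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5

    # Read blocks 262144..524287 back from the restarted FTL bdev
    # (--skip=262144 skips the first half, already verified) into testfile2.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 \
        --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 \
        --count=262144 --skip=262144 \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
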
00:24:19.848 [2024-11-07 09:52:47.419794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.848 [2024-11-07 09:52:47.419804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:19.848 [2024-11-07 09:52:47.419812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:24:19.848 [2024-11-07 09:52:47.419820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.848 [2024-11-07 09:52:47.424365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.848 [2024-11-07 09:52:47.424393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:19.848 [2024-11-07 09:52:47.424402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.491 ms 00:24:19.848 [2024-11-07 09:52:47.424409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.848 [2024-11-07 09:52:47.424546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.848 [2024-11-07 09:52:47.424555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:19.848 [2024-11-07 09:52:47.424563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:24:19.848 [2024-11-07 09:52:47.424570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.848 [2024-11-07 09:52:47.424609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.848 [2024-11-07 09:52:47.424618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:19.848 [2024-11-07 09:52:47.424644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:19.848 [2024-11-07 09:52:47.424652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.848 [2024-11-07 09:52:47.424672] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:19.848 [2024-11-07 09:52:47.427846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.848 [2024-11-07 09:52:47.427871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:19.848 [2024-11-07 09:52:47.427880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.179 ms 00:24:19.848 [2024-11-07 09:52:47.427890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.848 [2024-11-07 09:52:47.427916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.848 [2024-11-07 09:52:47.427924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:19.848 [2024-11-07 09:52:47.427932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:19.848 [2024-11-07 09:52:47.427939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.848 [2024-11-07 09:52:47.427957] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:19.848 [2024-11-07 09:52:47.427974] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:19.848 [2024-11-07 09:52:47.428007] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:19.848 [2024-11-07 09:52:47.428024] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:19.848 [2024-11-07 09:52:47.428123] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: 
[FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:19.848 [2024-11-07 09:52:47.428134] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:19.848 [2024-11-07 09:52:47.428144] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:19.848 [2024-11-07 09:52:47.428154] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:19.848 [2024-11-07 09:52:47.428162] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:19.848 [2024-11-07 09:52:47.428170] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:19.848 [2024-11-07 09:52:47.428177] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:19.848 [2024-11-07 09:52:47.428184] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:19.848 [2024-11-07 09:52:47.428191] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:19.848 [2024-11-07 09:52:47.428200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.848 [2024-11-07 09:52:47.428208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:19.848 [2024-11-07 09:52:47.428215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.245 ms 00:24:19.848 [2024-11-07 09:52:47.428222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.848 [2024-11-07 09:52:47.428303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.848 [2024-11-07 09:52:47.428311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:19.848 [2024-11-07 09:52:47.428318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:24:19.848 [2024-11-07 09:52:47.428325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.848 [2024-11-07 09:52:47.428424] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:19.848 [2024-11-07 09:52:47.428435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:19.848 [2024-11-07 09:52:47.428443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:19.848 [2024-11-07 09:52:47.428451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:19.848 [2024-11-07 09:52:47.428458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:19.848 [2024-11-07 09:52:47.428465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:19.848 [2024-11-07 09:52:47.428472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:19.848 [2024-11-07 09:52:47.428479] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:19.848 [2024-11-07 09:52:47.428486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:19.848 [2024-11-07 09:52:47.428492] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:19.848 [2024-11-07 09:52:47.428500] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:19.848 [2024-11-07 09:52:47.428506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:19.848 [2024-11-07 09:52:47.428513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:19.848 [2024-11-07 09:52:47.428519] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region nvc_md 00:24:19.848 [2024-11-07 09:52:47.428526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:19.848 [2024-11-07 09:52:47.428536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:19.848 [2024-11-07 09:52:47.428543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:19.848 [2024-11-07 09:52:47.428550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:19.848 [2024-11-07 09:52:47.428557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:19.848 [2024-11-07 09:52:47.428564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:19.848 [2024-11-07 09:52:47.428570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:19.848 [2024-11-07 09:52:47.428577] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:19.848 [2024-11-07 09:52:47.428583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:19.848 [2024-11-07 09:52:47.428589] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:19.848 [2024-11-07 09:52:47.428596] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:19.848 [2024-11-07 09:52:47.428603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:19.848 [2024-11-07 09:52:47.428609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:19.848 [2024-11-07 09:52:47.428616] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:19.848 [2024-11-07 09:52:47.428622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:19.848 [2024-11-07 09:52:47.428790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:19.848 [2024-11-07 09:52:47.428821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:19.848 [2024-11-07 09:52:47.428841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:19.848 [2024-11-07 09:52:47.428860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:19.848 [2024-11-07 09:52:47.428877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:19.848 [2024-11-07 09:52:47.428936] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:19.848 [2024-11-07 09:52:47.428958] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:19.849 [2024-11-07 09:52:47.428977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:19.849 [2024-11-07 09:52:47.428995] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:19.849 [2024-11-07 09:52:47.429013] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:19.849 [2024-11-07 09:52:47.429030] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:19.849 [2024-11-07 09:52:47.429088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:19.849 [2024-11-07 09:52:47.429111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:19.849 [2024-11-07 09:52:47.429130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:19.849 [2024-11-07 09:52:47.429148] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:19.849 [2024-11-07 09:52:47.429166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:19.849 [2024-11-07 
09:52:47.429184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:19.849 [2024-11-07 09:52:47.429241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:19.849 [2024-11-07 09:52:47.429263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:19.849 [2024-11-07 09:52:47.429282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:19.849 [2024-11-07 09:52:47.429302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:19.849 [2024-11-07 09:52:47.429320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:19.849 [2024-11-07 09:52:47.429369] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:19.849 [2024-11-07 09:52:47.429391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:19.849 [2024-11-07 09:52:47.429411] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:19.849 [2024-11-07 09:52:47.429441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:19.849 [2024-11-07 09:52:47.429499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:19.849 [2024-11-07 09:52:47.429531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:19.849 [2024-11-07 09:52:47.429559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:19.849 [2024-11-07 09:52:47.429587] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:19.849 [2024-11-07 09:52:47.429649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:19.849 [2024-11-07 09:52:47.429679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:19.849 [2024-11-07 09:52:47.429707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:19.849 [2024-11-07 09:52:47.429793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:19.849 [2024-11-07 09:52:47.429821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:19.849 [2024-11-07 09:52:47.429830] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:19.849 [2024-11-07 09:52:47.429837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:19.849 [2024-11-07 09:52:47.429844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:19.849 [2024-11-07 09:52:47.429851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:19.849 [2024-11-07 09:52:47.429858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:19.849 [2024-11-07 09:52:47.429865] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:19.849 [2024-11-07 09:52:47.429878] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:19.849 [2024-11-07 09:52:47.429886] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:19.849 [2024-11-07 09:52:47.429893] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:19.849 [2024-11-07 09:52:47.429901] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:19.849 [2024-11-07 09:52:47.429908] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:19.849 [2024-11-07 09:52:47.429917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.849 [2024-11-07 09:52:47.429924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:19.849 [2024-11-07 09:52:47.429932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.560 ms 00:24:19.849 [2024-11-07 09:52:47.429939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.849 [2024-11-07 09:52:47.455086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.849 [2024-11-07 09:52:47.455118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:19.849 [2024-11-07 09:52:47.455129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.079 ms 00:24:19.849 [2024-11-07 09:52:47.455137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.849 [2024-11-07 09:52:47.455218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.849 [2024-11-07 09:52:47.455226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:19.849 [2024-11-07 09:52:47.455233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:24:19.849 [2024-11-07 09:52:47.455240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.849 [2024-11-07 09:52:47.497778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.849 [2024-11-07 09:52:47.497815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:19.849 [2024-11-07 09:52:47.497827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.494 ms 00:24:19.849 [2024-11-07 09:52:47.497835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.849 [2024-11-07 09:52:47.497873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.849 [2024-11-07 09:52:47.497883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:19.849 [2024-11-07 09:52:47.497891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:19.849 [2024-11-07 09:52:47.497901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.849 [2024-11-07 09:52:47.498240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.849 [2024-11-07 09:52:47.498255] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:19.849 [2024-11-07 09:52:47.498263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:24:19.849 [2024-11-07 09:52:47.498271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.849 [2024-11-07 09:52:47.498389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.849 [2024-11-07 09:52:47.498398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:19.849 [2024-11-07 09:52:47.498407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:24:19.849 [2024-11-07 09:52:47.498416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.849 [2024-11-07 09:52:47.511202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.849 [2024-11-07 09:52:47.511331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:19.849 [2024-11-07 09:52:47.511349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.767 ms 00:24:19.849 [2024-11-07 09:52:47.511358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.108 [2024-11-07 09:52:47.523451] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:20.108 [2024-11-07 09:52:47.523483] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:20.108 [2024-11-07 09:52:47.523495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.108 [2024-11-07 09:52:47.523503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:20.108 [2024-11-07 09:52:47.523511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.051 ms 00:24:20.108 [2024-11-07 09:52:47.523518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.108 [2024-11-07 09:52:47.547313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.108 [2024-11-07 09:52:47.547350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:20.108 [2024-11-07 09:52:47.547361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.759 ms 00:24:20.108 [2024-11-07 09:52:47.547369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.108 [2024-11-07 09:52:47.558623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.108 [2024-11-07 09:52:47.558658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:20.108 [2024-11-07 09:52:47.558668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.216 ms 00:24:20.108 [2024-11-07 09:52:47.558674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.108 [2024-11-07 09:52:47.569746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.108 [2024-11-07 09:52:47.569858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:20.108 [2024-11-07 09:52:47.569872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.042 ms 00:24:20.108 [2024-11-07 09:52:47.569879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.108 [2024-11-07 09:52:47.570469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.108 [2024-11-07 09:52:47.570487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L 
checkpointing 00:24:20.108 [2024-11-07 09:52:47.570495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.516 ms 00:24:20.108 [2024-11-07 09:52:47.570505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.108 [2024-11-07 09:52:47.623968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.108 [2024-11-07 09:52:47.624011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:20.108 [2024-11-07 09:52:47.624027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.446 ms 00:24:20.108 [2024-11-07 09:52:47.624036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.108 [2024-11-07 09:52:47.634042] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:20.108 [2024-11-07 09:52:47.636265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.108 [2024-11-07 09:52:47.636293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:20.108 [2024-11-07 09:52:47.636305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.186 ms 00:24:20.108 [2024-11-07 09:52:47.636313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.108 [2024-11-07 09:52:47.636393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.108 [2024-11-07 09:52:47.636405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:20.108 [2024-11-07 09:52:47.636414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:20.108 [2024-11-07 09:52:47.636425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.108 [2024-11-07 09:52:47.636980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.108 [2024-11-07 09:52:47.637005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:20.108 [2024-11-07 09:52:47.637014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.518 ms 00:24:20.108 [2024-11-07 09:52:47.637021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.108 [2024-11-07 09:52:47.637042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.108 [2024-11-07 09:52:47.637050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:20.108 [2024-11-07 09:52:47.637058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:20.108 [2024-11-07 09:52:47.637065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.108 [2024-11-07 09:52:47.637097] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:20.108 [2024-11-07 09:52:47.637109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.108 [2024-11-07 09:52:47.637116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:20.108 [2024-11-07 09:52:47.637123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:20.108 [2024-11-07 09:52:47.637131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.108 [2024-11-07 09:52:47.659403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.108 [2024-11-07 09:52:47.659433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:20.108 [2024-11-07 09:52:47.659444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.255 ms 
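One figure in the ftl_layout dump above can be checked directly: with 20971520 L2P entries at 4 bytes per address, the l2p region must be 80 MiB, which is exactly what dump_region reports ("Region l2p ... blocks: 80.00 MiB"). A one-line shell check, using only values copied from the log:

    # 20971520 L2P entries * 4 B per entry = 83886080 B = 80 MiB
    echo "$(( 20971520 * 4 / 1024 / 1024 )) MiB"   # -> 80 MiB
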
00:24:20.108 [2024-11-07 09:52:47.659456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.108 [2024-11-07 09:52:47.659525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.108 [2024-11-07 09:52:47.659534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:20.108 [2024-11-07 09:52:47.659542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:24:20.108 [2024-11-07 09:52:47.659549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.108 [2024-11-07 09:52:47.660448] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 254.290 ms, result 0 00:24:21.481  [2024-11-07T09:52:50.084Z] Copying: 47/1024 [MB] (47 MBps) [2024-11-07T09:52:51.016Z] Copying: 99/1024 [MB] (51 MBps) [2024-11-07T09:52:51.948Z] Copying: 148/1024 [MB] (49 MBps) [2024-11-07T09:52:52.880Z] Copying: 196/1024 [MB] (48 MBps) [2024-11-07T09:52:54.252Z] Copying: 245/1024 [MB] (48 MBps) [2024-11-07T09:52:55.185Z] Copying: 292/1024 [MB] (46 MBps) [2024-11-07T09:52:56.121Z] Copying: 337/1024 [MB] (45 MBps) [2024-11-07T09:52:57.053Z] Copying: 383/1024 [MB] (45 MBps) [2024-11-07T09:52:57.986Z] Copying: 429/1024 [MB] (46 MBps) [2024-11-07T09:52:58.919Z] Copying: 475/1024 [MB] (45 MBps) [2024-11-07T09:52:59.853Z] Copying: 518/1024 [MB] (43 MBps) [2024-11-07T09:53:01.240Z] Copying: 554/1024 [MB] (35 MBps) [2024-11-07T09:53:01.835Z] Copying: 571/1024 [MB] (16 MBps) [2024-11-07T09:53:03.211Z] Copying: 589/1024 [MB] (18 MBps) [2024-11-07T09:53:04.143Z] Copying: 623/1024 [MB] (33 MBps) [2024-11-07T09:53:05.079Z] Copying: 668/1024 [MB] (45 MBps) [2024-11-07T09:53:06.148Z] Copying: 712/1024 [MB] (43 MBps) [2024-11-07T09:53:07.087Z] Copying: 745/1024 [MB] (33 MBps) [2024-11-07T09:53:08.025Z] Copying: 774/1024 [MB] (28 MBps) [2024-11-07T09:53:08.966Z] Copying: 807/1024 [MB] (33 MBps) [2024-11-07T09:53:09.908Z] Copying: 824/1024 [MB] (17 MBps) [2024-11-07T09:53:10.850Z] Copying: 856/1024 [MB] (31 MBps) [2024-11-07T09:53:12.235Z] Copying: 878/1024 [MB] (22 MBps) [2024-11-07T09:53:13.179Z] Copying: 900/1024 [MB] (22 MBps) [2024-11-07T09:53:14.120Z] Copying: 921/1024 [MB] (20 MBps) [2024-11-07T09:53:15.061Z] Copying: 945/1024 [MB] (23 MBps) [2024-11-07T09:53:16.004Z] Copying: 978/1024 [MB] (32 MBps) [2024-11-07T09:53:16.947Z] Copying: 1001/1024 [MB] (23 MBps) [2024-11-07T09:53:17.519Z] Copying: 1016/1024 [MB] (14 MBps) [2024-11-07T09:53:17.519Z] Copying: 1024/1024 [MB] (average 34 MBps)[2024-11-07 09:53:17.414000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.848 [2024-11-07 09:53:17.414073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:49.848 [2024-11-07 09:53:17.414093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:49.848 [2024-11-07 09:53:17.414105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.848 [2024-11-07 09:53:17.414136] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:49.848 [2024-11-07 09:53:17.418748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.848 [2024-11-07 09:53:17.418790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:49.848 [2024-11-07 09:53:17.418811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.591 ms 00:24:49.848 [2024-11-07 09:53:17.418822] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.848 [2024-11-07 09:53:17.419142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.848 [2024-11-07 09:53:17.419156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:49.848 [2024-11-07 09:53:17.419168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:24:49.848 [2024-11-07 09:53:17.419180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.848 [2024-11-07 09:53:17.424423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.848 [2024-11-07 09:53:17.424443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:49.848 [2024-11-07 09:53:17.424453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.224 ms 00:24:49.848 [2024-11-07 09:53:17.424461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.848 [2024-11-07 09:53:17.430606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.848 [2024-11-07 09:53:17.430745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:49.848 [2024-11-07 09:53:17.430761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.128 ms 00:24:49.848 [2024-11-07 09:53:17.430769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.848 [2024-11-07 09:53:17.454705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.848 [2024-11-07 09:53:17.454735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:49.849 [2024-11-07 09:53:17.454745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.887 ms 00:24:49.849 [2024-11-07 09:53:17.454752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.849 [2024-11-07 09:53:17.469066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.849 [2024-11-07 09:53:17.469095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:49.849 [2024-11-07 09:53:17.469107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.284 ms 00:24:49.849 [2024-11-07 09:53:17.469114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.849 [2024-11-07 09:53:17.473368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.849 [2024-11-07 09:53:17.473402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:49.849 [2024-11-07 09:53:17.473411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.220 ms 00:24:49.849 [2024-11-07 09:53:17.473419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.849 [2024-11-07 09:53:17.496926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.849 [2024-11-07 09:53:17.497043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:49.849 [2024-11-07 09:53:17.497058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.494 ms 00:24:49.849 [2024-11-07 09:53:17.497065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.111 [2024-11-07 09:53:17.520026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.111 [2024-11-07 09:53:17.520139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:50.111 [2024-11-07 09:53:17.520153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.934 ms 
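Two more reported numbers are self-consistent. The spdk_dd progress trace above ends at "1024/1024 [MB] (average 34 MBps)", which matches the roughly 30 seconds its timestamps span (09:52:47 to 09:53:17), and the WAF printed by ftl_dev_dump_stats during the first shutdown earlier in this log (total writes 166336, user writes 164352) is simply their ratio. Quick checks with the log's values:

    # ~30 s for 1024 MB gives the reported average throughput.
    echo "$(( 1024 / 30 )) MBps"                              # -> 34 MBps

    # WAF = total writes / user writes, from the earlier stats dump.
    awk 'BEGIN { printf "WAF = %.4f\n", 166336 / 164352 }'    # -> WAF = 1.0121
    # (In the stats dump below, user writes is 0, so WAF is reported as "inf".)
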
00:24:50.111 [2024-11-07 09:53:17.520159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.111 [2024-11-07 09:53:17.543560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.111 [2024-11-07 09:53:17.543699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:50.111 [2024-11-07 09:53:17.543715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.339 ms 00:24:50.111 [2024-11-07 09:53:17.543722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.111 [2024-11-07 09:53:17.566942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.111 [2024-11-07 09:53:17.567046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:50.111 [2024-11-07 09:53:17.567060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.172 ms 00:24:50.111 [2024-11-07 09:53:17.567067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.111 [2024-11-07 09:53:17.567093] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:50.111 [2024-11-07 09:53:17.567106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:24:50.111 [2024-11-07 09:53:17.567120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:24:50.111 [2024-11-07 09:53:17.567129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:50.111 [2024-11-07 09:53:17.567136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:50.112 [2024-11-07 09:53:17.567144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:50.112 [2024-11-07 09:53:17.567151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:50.112 [2024-11-07 09:53:17.567158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:50.112 [2024-11-07 09:53:17.567165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:50.112 [2024-11-07 09:53:17.567173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:50.112 [2024-11-07 09:53:17.567180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:50.112 [2024-11-07 09:53:17.567187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:50.112 [2024-11-07 09:53:17.567195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:50.112 [2024-11-07 09:53:17.567202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:50.112 [2024-11-07 09:53:17.567209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:50.112 [2024-11-07 09:53:17.567217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:50.112 [2024-11-07 09:53:17.567224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:50.112 [2024-11-07 09:53:17.567231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: 
free 00:24:50.112 [2024-11-07 09:53:17.567238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free [... 73 identical records trimmed: Bands 19-91 are likewise 0 / 261120 wr_cnt: 0 state: free ...] 00:24:50.112 [2024-11-07 09:53:17.567824] ftl_debug.c:
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:50.112 [2024-11-07 09:53:17.567832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:50.113 [2024-11-07 09:53:17.567839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:50.113 [2024-11-07 09:53:17.567846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:50.113 [2024-11-07 09:53:17.567854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:50.113 [2024-11-07 09:53:17.567861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:50.113 [2024-11-07 09:53:17.567868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:50.113 [2024-11-07 09:53:17.567876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:50.113 [2024-11-07 09:53:17.567883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:50.113 [2024-11-07 09:53:17.567898] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:50.113 [2024-11-07 09:53:17.567908] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4d0892d7-c7ff-4fc8-aad2-a34eb0dcb004 00:24:50.113 [2024-11-07 09:53:17.567916] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:24:50.113 [2024-11-07 09:53:17.567923] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:50.113 [2024-11-07 09:53:17.567929] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:50.113 [2024-11-07 09:53:17.567937] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:50.113 [2024-11-07 09:53:17.567943] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:50.113 [2024-11-07 09:53:17.567951] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:50.113 [2024-11-07 09:53:17.567964] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:50.113 [2024-11-07 09:53:17.567970] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:50.113 [2024-11-07 09:53:17.567977] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:50.113 [2024-11-07 09:53:17.567983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.113 [2024-11-07 09:53:17.567991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:50.113 [2024-11-07 09:53:17.567999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.892 ms 00:24:50.113 [2024-11-07 09:53:17.568006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.113 [2024-11-07 09:53:17.580385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.113 [2024-11-07 09:53:17.580413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:50.113 [2024-11-07 09:53:17.580424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.361 ms 00:24:50.113 [2024-11-07 09:53:17.580431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.113 [2024-11-07 09:53:17.580799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.113 [2024-11-07 09:53:17.580809] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:50.113 [2024-11-07 09:53:17.580822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.342 ms 00:24:50.113 [2024-11-07 09:53:17.580830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.113 [2024-11-07 09:53:17.613161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.113 [2024-11-07 09:53:17.613192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:50.113 [2024-11-07 09:53:17.613202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.113 [2024-11-07 09:53:17.613209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.113 [2024-11-07 09:53:17.613258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.113 [2024-11-07 09:53:17.613266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:50.113 [2024-11-07 09:53:17.613277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.113 [2024-11-07 09:53:17.613285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.113 [2024-11-07 09:53:17.613335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.113 [2024-11-07 09:53:17.613345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:50.113 [2024-11-07 09:53:17.613352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.113 [2024-11-07 09:53:17.613359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.113 [2024-11-07 09:53:17.613373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.113 [2024-11-07 09:53:17.613380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:50.113 [2024-11-07 09:53:17.613387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.113 [2024-11-07 09:53:17.613397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.113 [2024-11-07 09:53:17.690957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.113 [2024-11-07 09:53:17.691004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:50.113 [2024-11-07 09:53:17.691015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.113 [2024-11-07 09:53:17.691024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.113 [2024-11-07 09:53:17.752680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.113 [2024-11-07 09:53:17.752851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:50.113 [2024-11-07 09:53:17.752868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.113 [2024-11-07 09:53:17.752880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.113 [2024-11-07 09:53:17.752947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.113 [2024-11-07 09:53:17.752957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:50.113 [2024-11-07 09:53:17.752965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.113 [2024-11-07 09:53:17.752973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.113 [2024-11-07 09:53:17.753005] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.113 [2024-11-07 09:53:17.753013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:50.113 [2024-11-07 09:53:17.753021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.113 [2024-11-07 09:53:17.753029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.113 [2024-11-07 09:53:17.753119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.113 [2024-11-07 09:53:17.753129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:50.113 [2024-11-07 09:53:17.753137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.113 [2024-11-07 09:53:17.753143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.113 [2024-11-07 09:53:17.753170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.113 [2024-11-07 09:53:17.753179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:50.113 [2024-11-07 09:53:17.753186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.113 [2024-11-07 09:53:17.753193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.113 [2024-11-07 09:53:17.753229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.113 [2024-11-07 09:53:17.753238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:50.113 [2024-11-07 09:53:17.753246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.113 [2024-11-07 09:53:17.753253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.113 [2024-11-07 09:53:17.753289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.113 [2024-11-07 09:53:17.753298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:50.113 [2024-11-07 09:53:17.753305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.113 [2024-11-07 09:53:17.753312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.113 [2024-11-07 09:53:17.753418] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 339.404 ms, result 0 00:24:51.055 00:24:51.055 00:24:51.055 09:53:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:24:52.966 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:24:52.966 09:53:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:24:52.966 09:53:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:24:52.966 09:53:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:52.966 09:53:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:53.226 09:53:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:24:53.226 09:53:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:53.226 09:53:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:24:53.226 Process with pid 76924 is not found 00:24:53.226 09:53:20 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 76924 00:24:53.226 09:53:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@952 -- # '[' -z 76924 ']' 00:24:53.226 09:53:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@956 -- # kill -0 76924 00:24:53.226 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (76924) - No such process 00:24:53.226 09:53:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@979 -- # echo 'Process with pid 76924 is not found' 00:24:53.226 09:53:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:24:53.488 Remove shared memory files 00:24:53.488 09:53:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:24:53.488 09:53:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:24:53.488 09:53:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:24:53.488 09:53:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:24:53.488 09:53:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:24:53.488 09:53:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:24:53.488 09:53:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:24:53.488 ************************************ 00:24:53.488 END TEST ftl_dirty_shutdown 00:24:53.488 ************************************ 00:24:53.488 00:24:53.488 real 3m0.495s 00:24:53.488 user 3m19.194s 00:24:53.488 sys 0m24.395s 00:24:53.488 09:53:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:53.488 09:53:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:53.749 09:53:21 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:24:53.749 09:53:21 ftl -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:24:53.749 09:53:21 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:53.749 09:53:21 ftl -- common/autotest_common.sh@10 -- # set +x 00:24:53.749 ************************************ 00:24:53.749 START TEST ftl_upgrade_shutdown 00:24:53.749 ************************************ 00:24:53.749 09:53:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:24:53.749 * Looking for test storage... 
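The teardown just traced shows how killprocess tolerates a target that already exited: it first checks that a pid was recorded at all, then probes with kill -0, and only reports "Process with pid 76924 is not found" instead of failing. A minimal bash sketch of that guard follows; it is a simplification, and the real killprocess in autotest_common.sh additionally sends SIGTERM, waits, and escalates to SIGKILL:

  # Sketch of the pid-existence guard visible in the trace above; simplified.
  killprocess_sketch() {
      local pid=$1
      [[ -z $pid ]] && return 1                 # no pid was recorded
      if ! kill -0 "$pid" 2>/dev/null; then     # probe without sending a signal
          echo "Process with pid $pid is not found"
          return 0                              # already gone: treated as success
      fi
      kill "$pid"                               # real helper then waits/escalates
  }
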
00:24:53.749 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:24:53.749 09:53:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:53.749 09:53:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:24:53.749 09:53:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:53.749 09:53:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:53.749 09:53:21 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:53.749 09:53:21 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:53.749 09:53:21 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:53.749 09:53:21 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:24:53.749 09:53:21 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:24:53.749 09:53:21 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:24:53.749 09:53:21 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:24:53.749 09:53:21 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:24:53.749 09:53:21 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:24:53.749 09:53:21 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:24:53.749 09:53:21 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:53.749 09:53:21 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:24:53.749 09:53:21 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:24:53.749 09:53:21 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:53.749 09:53:21 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:53.749 09:53:21 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:24:53.749 09:53:21 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:24:53.749 09:53:21 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:53.749 09:53:21 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:24:53.749 09:53:21 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:24:53.749 09:53:21 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:24:53.749 09:53:21 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:24:53.749 09:53:21 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:53.749 09:53:21 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:24:53.749 09:53:21 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:24:53.749 09:53:21 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:53.749 09:53:21 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:53.749 09:53:21 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:24:53.749 09:53:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:53.749 09:53:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:53.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.749 --rc genhtml_branch_coverage=1 00:24:53.749 --rc genhtml_function_coverage=1 00:24:53.749 --rc genhtml_legend=1 00:24:53.749 --rc geninfo_all_blocks=1 00:24:53.749 --rc geninfo_unexecuted_blocks=1 00:24:53.749 00:24:53.749 ' 00:24:53.749 09:53:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:53.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.749 --rc genhtml_branch_coverage=1 00:24:53.749 --rc genhtml_function_coverage=1 00:24:53.749 --rc genhtml_legend=1 00:24:53.749 --rc geninfo_all_blocks=1 00:24:53.749 --rc geninfo_unexecuted_blocks=1 00:24:53.749 00:24:53.749 ' 00:24:53.749 09:53:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:53.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.749 --rc genhtml_branch_coverage=1 00:24:53.749 --rc genhtml_function_coverage=1 00:24:53.749 --rc genhtml_legend=1 00:24:53.749 --rc geninfo_all_blocks=1 00:24:53.749 --rc geninfo_unexecuted_blocks=1 00:24:53.749 00:24:53.749 ' 00:24:53.749 09:53:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:53.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.749 --rc genhtml_branch_coverage=1 00:24:53.749 --rc genhtml_function_coverage=1 00:24:53.749 --rc genhtml_legend=1 00:24:53.749 --rc geninfo_all_blocks=1 00:24:53.749 --rc geninfo_unexecuted_blocks=1 00:24:53.749 00:24:53.749 ' 00:24:53.749 09:53:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:24:53.749 09:53:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:24:53.749 09:53:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:24:53.749 09:53:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:24:53.749 09:53:21 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:24:53.749 09:53:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:53.749 09:53:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:53.749 09:53:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:24:53.749 09:53:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:24:53.749 09:53:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:53.749 09:53:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:53.749 09:53:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:24:53.749 09:53:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:24:53.749 09:53:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:53.750 09:53:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:53.750 09:53:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:24:53.750 09:53:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:24:53.750 09:53:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:53.750 09:53:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:53.750 09:53:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:24:53.750 09:53:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:24:53.750 09:53:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:53.750 09:53:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:53.750 09:53:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:53.750 09:53:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:53.750 09:53:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:24:53.750 09:53:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:24:53.750 09:53:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:53.750 09:53:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:53.750 09:53:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:53.750 09:53:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:24:53.750 09:53:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:24:53.750 09:53:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:24:53.750 09:53:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:24:53.750 09:53:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:24:53.750 09:53:21 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:24:53.750 09:53:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:24:53.750 09:53:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:24:53.750 09:53:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:24:53.750 09:53:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:24:53.750 09:53:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:24:53.750 09:53:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:24:53.750 09:53:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:24:53.750 09:53:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:24:53.750 09:53:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:24:53.750 09:53:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:24:53.750 09:53:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=78924 00:24:53.750 09:53:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:24:53.750 09:53:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 78924 00:24:53.750 09:53:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 78924 ']' 00:24:53.750 09:53:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:24:53.750 09:53:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:53.750 09:53:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:53.750 09:53:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:53.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:53.750 09:53:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:53.750 09:53:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:54.010 [2024-11-07 09:53:21.458762] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
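At this point waitforlisten blocks until the freshly launched spdk_tgt (pid 78924) opens its RPC socket at /var/tmp/spdk.sock. Conceptually it is a poll loop like the hedged sketch below; the loop bound and sleep interval are assumptions, and the real helper in autotest_common.sh also enforces a retry budget and distinguishes a dead pid from a slow start:

  # Conceptual sketch of waiting for the target's RPC Unix socket to appear.
  wait_for_rpc_sock() {
      local pid=$1 sock=${2:-/var/tmp/spdk.sock}
      local i
      for ((i = 0; i < 100; i++)); do
          kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
          [[ -S $sock ]] && return 0               # socket is up; RPC reachable
          sleep 0.1
      done
      return 1                                      # timed out
  }
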
00:24:54.010 [2024-11-07 09:53:21.459035] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78924 ] 00:24:54.010 [2024-11-07 09:53:21.617465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:54.271 [2024-11-07 09:53:21.715075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:54.849 09:53:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:54.849 09:53:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:24:54.849 09:53:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:24:54.849 09:53:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:24:54.849 09:53:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:24:54.849 09:53:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:24:54.849 09:53:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:24:54.849 09:53:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:24:54.849 09:53:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:24:54.849 09:53:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:24:54.849 09:53:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:24:54.849 09:53:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:24:54.849 09:53:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:24:54.849 09:53:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:24:54.849 09:53:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:24:54.849 09:53:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:24:54.849 09:53:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:24:54.849 09:53:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:24:54.849 09:53:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:24:54.849 09:53:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:24:54.849 09:53:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:24:54.849 09:53:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:24:54.849 09:53:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:24:55.110 09:53:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:24:55.110 09:53:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:24:55.110 09:53:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:24:55.110 09:53:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=basen1 00:24:55.110 09:53:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:24:55.110 09:53:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:24:55.110 09:53:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 
-- # local nb 00:24:55.110 09:53:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:24:55.371 09:53:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:24:55.371 { 00:24:55.371 "name": "basen1", 00:24:55.371 "aliases": [ 00:24:55.371 "2e378ae0-f172-4446-85b4-bb03e6f775e7" 00:24:55.371 ], 00:24:55.371 "product_name": "NVMe disk", 00:24:55.371 "block_size": 4096, 00:24:55.371 "num_blocks": 1310720, 00:24:55.371 "uuid": "2e378ae0-f172-4446-85b4-bb03e6f775e7", 00:24:55.371 "numa_id": -1, 00:24:55.371 "assigned_rate_limits": { 00:24:55.371 "rw_ios_per_sec": 0, 00:24:55.371 "rw_mbytes_per_sec": 0, 00:24:55.371 "r_mbytes_per_sec": 0, 00:24:55.371 "w_mbytes_per_sec": 0 00:24:55.371 }, 00:24:55.371 "claimed": true, 00:24:55.371 "claim_type": "read_many_write_one", 00:24:55.371 "zoned": false, 00:24:55.371 "supported_io_types": { 00:24:55.371 "read": true, 00:24:55.371 "write": true, 00:24:55.371 "unmap": true, 00:24:55.371 "flush": true, 00:24:55.371 "reset": true, 00:24:55.371 "nvme_admin": true, 00:24:55.371 "nvme_io": true, 00:24:55.371 "nvme_io_md": false, 00:24:55.371 "write_zeroes": true, 00:24:55.371 "zcopy": false, 00:24:55.371 "get_zone_info": false, 00:24:55.371 "zone_management": false, 00:24:55.371 "zone_append": false, 00:24:55.371 "compare": true, 00:24:55.371 "compare_and_write": false, 00:24:55.371 "abort": true, 00:24:55.371 "seek_hole": false, 00:24:55.371 "seek_data": false, 00:24:55.371 "copy": true, 00:24:55.371 "nvme_iov_md": false 00:24:55.371 }, 00:24:55.371 "driver_specific": { 00:24:55.371 "nvme": [ 00:24:55.371 { 00:24:55.371 "pci_address": "0000:00:11.0", 00:24:55.371 "trid": { 00:24:55.371 "trtype": "PCIe", 00:24:55.371 "traddr": "0000:00:11.0" 00:24:55.371 }, 00:24:55.371 "ctrlr_data": { 00:24:55.371 "cntlid": 0, 00:24:55.371 "vendor_id": "0x1b36", 00:24:55.371 "model_number": "QEMU NVMe Ctrl", 00:24:55.371 "serial_number": "12341", 00:24:55.371 "firmware_revision": "8.0.0", 00:24:55.371 "subnqn": "nqn.2019-08.org.qemu:12341", 00:24:55.371 "oacs": { 00:24:55.371 "security": 0, 00:24:55.371 "format": 1, 00:24:55.371 "firmware": 0, 00:24:55.371 "ns_manage": 1 00:24:55.371 }, 00:24:55.371 "multi_ctrlr": false, 00:24:55.371 "ana_reporting": false 00:24:55.371 }, 00:24:55.371 "vs": { 00:24:55.371 "nvme_version": "1.4" 00:24:55.371 }, 00:24:55.371 "ns_data": { 00:24:55.371 "id": 1, 00:24:55.371 "can_share": false 00:24:55.371 } 00:24:55.371 } 00:24:55.371 ], 00:24:55.371 "mp_policy": "active_passive" 00:24:55.371 } 00:24:55.371 } 00:24:55.371 ]' 00:24:55.371 09:53:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:24:55.371 09:53:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:24:55.371 09:53:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:24:55.371 09:53:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # nb=1310720 00:24:55.371 09:53:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:24:55.371 09:53:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1390 -- # echo 5120 00:24:55.371 09:53:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:24:55.371 09:53:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:24:55.371 09:53:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:24:55.371 09:53:22 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:55.371 09:53:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:24:55.632 09:53:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=e95e26f1-4a0a-4bc8-ad09-596d8025d015 00:24:55.632 09:53:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:24:55.632 09:53:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e95e26f1-4a0a-4bc8-ad09-596d8025d015 00:24:55.894 09:53:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:24:55.894 09:53:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=15d36886-4c77-448f-918e-1560ac4937bc 00:24:55.894 09:53:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 15d36886-4c77-448f-918e-1560ac4937bc 00:24:56.156 09:53:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=c54de725-a81d-4b12-8f15-68a672be1863 00:24:56.156 09:53:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z c54de725-a81d-4b12-8f15-68a672be1863 ]] 00:24:56.156 09:53:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 c54de725-a81d-4b12-8f15-68a672be1863 5120 00:24:56.156 09:53:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:24:56.156 09:53:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:24:56.156 09:53:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=c54de725-a81d-4b12-8f15-68a672be1863 00:24:56.156 09:53:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:24:56.156 09:53:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size c54de725-a81d-4b12-8f15-68a672be1863 00:24:56.157 09:53:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=c54de725-a81d-4b12-8f15-68a672be1863 00:24:56.157 09:53:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:24:56.157 09:53:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:24:56.157 09:53:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:24:56.157 09:53:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c54de725-a81d-4b12-8f15-68a672be1863 00:24:56.418 09:53:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:24:56.418 { 00:24:56.418 "name": "c54de725-a81d-4b12-8f15-68a672be1863", 00:24:56.418 "aliases": [ 00:24:56.418 "lvs/basen1p0" 00:24:56.418 ], 00:24:56.418 "product_name": "Logical Volume", 00:24:56.418 "block_size": 4096, 00:24:56.418 "num_blocks": 5242880, 00:24:56.418 "uuid": "c54de725-a81d-4b12-8f15-68a672be1863", 00:24:56.418 "assigned_rate_limits": { 00:24:56.418 "rw_ios_per_sec": 0, 00:24:56.418 "rw_mbytes_per_sec": 0, 00:24:56.418 "r_mbytes_per_sec": 0, 00:24:56.418 "w_mbytes_per_sec": 0 00:24:56.418 }, 00:24:56.418 "claimed": false, 00:24:56.418 "zoned": false, 00:24:56.418 "supported_io_types": { 00:24:56.418 "read": true, 00:24:56.418 "write": true, 00:24:56.418 "unmap": true, 00:24:56.418 "flush": false, 00:24:56.418 "reset": true, 00:24:56.418 "nvme_admin": false, 00:24:56.418 "nvme_io": false, 00:24:56.418 "nvme_io_md": false, 00:24:56.418 "write_zeroes": 
true, 00:24:56.418 "zcopy": false, 00:24:56.418 "get_zone_info": false, 00:24:56.418 "zone_management": false, 00:24:56.418 "zone_append": false, 00:24:56.418 "compare": false, 00:24:56.418 "compare_and_write": false, 00:24:56.418 "abort": false, 00:24:56.418 "seek_hole": true, 00:24:56.418 "seek_data": true, 00:24:56.418 "copy": false, 00:24:56.418 "nvme_iov_md": false 00:24:56.418 }, 00:24:56.418 "driver_specific": { 00:24:56.418 "lvol": { 00:24:56.418 "lvol_store_uuid": "15d36886-4c77-448f-918e-1560ac4937bc", 00:24:56.418 "base_bdev": "basen1", 00:24:56.418 "thin_provision": true, 00:24:56.418 "num_allocated_clusters": 0, 00:24:56.418 "snapshot": false, 00:24:56.418 "clone": false, 00:24:56.418 "esnap_clone": false 00:24:56.418 } 00:24:56.418 } 00:24:56.418 } 00:24:56.418 ]' 00:24:56.418 09:53:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:24:56.418 09:53:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:24:56.418 09:53:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:24:56.418 09:53:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # nb=5242880 00:24:56.418 09:53:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=20480 00:24:56.418 09:53:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1390 -- # echo 20480 00:24:56.418 09:53:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:24:56.418 09:53:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:24:56.418 09:53:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:24:56.680 09:53:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:24:56.680 09:53:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:24:56.680 09:53:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:24:56.942 09:53:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:24:56.942 09:53:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:24:56.942 09:53:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d c54de725-a81d-4b12-8f15-68a672be1863 -c cachen1p0 --l2p_dram_limit 2 00:24:57.206 [2024-11-07 09:53:24.651052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:57.206 [2024-11-07 09:53:24.651101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:24:57.206 [2024-11-07 09:53:24.651117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:24:57.206 [2024-11-07 09:53:24.651125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:57.206 [2024-11-07 09:53:24.651180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:57.206 [2024-11-07 09:53:24.651190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:24:57.206 [2024-11-07 09:53:24.651200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:24:57.206 [2024-11-07 09:53:24.651208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:57.206 [2024-11-07 09:53:24.651228] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:24:57.206 [2024-11-07 
09:53:24.652011] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:24:57.206 [2024-11-07 09:53:24.652031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:57.206 [2024-11-07 09:53:24.652039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:24:57.206 [2024-11-07 09:53:24.652048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.805 ms 00:24:57.206 [2024-11-07 09:53:24.652056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:57.206 [2024-11-07 09:53:24.652090] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID af303546-c6d2-4200-836f-b9c072444d67 00:24:57.206 [2024-11-07 09:53:24.653259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:57.206 [2024-11-07 09:53:24.653296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:24:57.206 [2024-11-07 09:53:24.653306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:24:57.206 [2024-11-07 09:53:24.653315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:57.206 [2024-11-07 09:53:24.658648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:57.206 [2024-11-07 09:53:24.658677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:24:57.206 [2024-11-07 09:53:24.658690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.261 ms 00:24:57.206 [2024-11-07 09:53:24.658700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:57.206 [2024-11-07 09:53:24.658738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:57.206 [2024-11-07 09:53:24.658748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:24:57.206 [2024-11-07 09:53:24.658756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:24:57.206 [2024-11-07 09:53:24.658766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:57.206 [2024-11-07 09:53:24.658815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:57.206 [2024-11-07 09:53:24.658826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:24:57.206 [2024-11-07 09:53:24.658834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:24:57.206 [2024-11-07 09:53:24.658847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:57.206 [2024-11-07 09:53:24.658868] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:24:57.206 [2024-11-07 09:53:24.662463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:57.206 [2024-11-07 09:53:24.662494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:24:57.206 [2024-11-07 09:53:24.662507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.598 ms 00:24:57.206 [2024-11-07 09:53:24.662516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:57.206 [2024-11-07 09:53:24.662543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:57.206 [2024-11-07 09:53:24.662553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:24:57.206 [2024-11-07 09:53:24.662563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:24:57.206 [2024-11-07 09:53:24.662571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:24:57.206 [2024-11-07 09:53:24.662589] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:24:57.206 [2024-11-07 09:53:24.662744] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:24:57.206 [2024-11-07 09:53:24.662761] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:24:57.206 [2024-11-07 09:53:24.662773] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:24:57.206 [2024-11-07 09:53:24.662785] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:24:57.206 [2024-11-07 09:53:24.662795] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:24:57.206 [2024-11-07 09:53:24.662805] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:24:57.206 [2024-11-07 09:53:24.662814] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:24:57.206 [2024-11-07 09:53:24.662827] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:24:57.206 [2024-11-07 09:53:24.662835] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:24:57.206 [2024-11-07 09:53:24.662845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:57.206 [2024-11-07 09:53:24.662853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:24:57.206 [2024-11-07 09:53:24.662863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.256 ms 00:24:57.206 [2024-11-07 09:53:24.662872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:57.206 [2024-11-07 09:53:24.662957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:57.206 [2024-11-07 09:53:24.662966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:24:57.206 [2024-11-07 09:53:24.662977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.067 ms 00:24:57.206 [2024-11-07 09:53:24.662991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:57.206 [2024-11-07 09:53:24.663106] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:24:57.206 [2024-11-07 09:53:24.663121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:24:57.206 [2024-11-07 09:53:24.663132] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:24:57.206 [2024-11-07 09:53:24.663141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:57.206 [2024-11-07 09:53:24.663151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:24:57.206 [2024-11-07 09:53:24.663159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:24:57.206 [2024-11-07 09:53:24.663168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:24:57.206 [2024-11-07 09:53:24.663176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:24:57.206 [2024-11-07 09:53:24.663186] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:24:57.206 [2024-11-07 09:53:24.663193] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:57.206 [2024-11-07 09:53:24.663202] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:24:57.206 [2024-11-07 09:53:24.663211] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:24:57.206 [2024-11-07 09:53:24.663219] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:57.206 [2024-11-07 09:53:24.663227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:24:57.206 [2024-11-07 09:53:24.663236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:24:57.206 [2024-11-07 09:53:24.663244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:57.206 [2024-11-07 09:53:24.663254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:24:57.206 [2024-11-07 09:53:24.663262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:24:57.206 [2024-11-07 09:53:24.663272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:57.206 [2024-11-07 09:53:24.663280] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:24:57.206 [2024-11-07 09:53:24.663306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:24:57.206 [2024-11-07 09:53:24.663315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:24:57.206 [2024-11-07 09:53:24.663326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:24:57.206 [2024-11-07 09:53:24.663332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:24:57.206 [2024-11-07 09:53:24.663340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:24:57.206 [2024-11-07 09:53:24.663347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:24:57.206 [2024-11-07 09:53:24.663356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:24:57.206 [2024-11-07 09:53:24.663362] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:24:57.206 [2024-11-07 09:53:24.663370] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:24:57.206 [2024-11-07 09:53:24.663377] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:24:57.206 [2024-11-07 09:53:24.663385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:24:57.207 [2024-11-07 09:53:24.663391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:24:57.207 [2024-11-07 09:53:24.663401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:24:57.207 [2024-11-07 09:53:24.663407] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:57.207 [2024-11-07 09:53:24.663415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:24:57.207 [2024-11-07 09:53:24.663422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:24:57.207 [2024-11-07 09:53:24.663430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:57.207 [2024-11-07 09:53:24.663437] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:24:57.207 [2024-11-07 09:53:24.663445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:24:57.207 [2024-11-07 09:53:24.663452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:57.207 [2024-11-07 09:53:24.663460] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:24:57.207 [2024-11-07 09:53:24.663466] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:24:57.207 [2024-11-07 09:53:24.663474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:57.207 [2024-11-07 09:53:24.663480] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:24:57.207 [2024-11-07 09:53:24.663489] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:24:57.207 [2024-11-07 09:53:24.663496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:24:57.207 [2024-11-07 09:53:24.663506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:57.207 [2024-11-07 09:53:24.663514] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:24:57.207 [2024-11-07 09:53:24.663523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:24:57.207 [2024-11-07 09:53:24.663530] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:24:57.207 [2024-11-07 09:53:24.663538] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:24:57.207 [2024-11-07 09:53:24.663544] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:24:57.207 [2024-11-07 09:53:24.663552] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:24:57.207 [2024-11-07 09:53:24.663562] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:24:57.207 [2024-11-07 09:53:24.663573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:57.207 [2024-11-07 09:53:24.663583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:24:57.207 [2024-11-07 09:53:24.663592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:24:57.207 [2024-11-07 09:53:24.663599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:24:57.207 [2024-11-07 09:53:24.663608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:24:57.207 [2024-11-07 09:53:24.663615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:24:57.207 [2024-11-07 09:53:24.663623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:24:57.207 [2024-11-07 09:53:24.663641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:24:57.207 [2024-11-07 09:53:24.663650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:24:57.207 [2024-11-07 09:53:24.663657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:24:57.207 [2024-11-07 09:53:24.663667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:24:57.207 [2024-11-07 09:53:24.663674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:24:57.207 [2024-11-07 09:53:24.663683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:24:57.207 [2024-11-07 09:53:24.663690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:24:57.207 [2024-11-07 09:53:24.663700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:24:57.207 [2024-11-07 09:53:24.663707] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:24:57.207 [2024-11-07 09:53:24.663717] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:57.207 [2024-11-07 09:53:24.663725] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:57.207 [2024-11-07 09:53:24.663733] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:24:57.207 [2024-11-07 09:53:24.663740] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:24:57.207 [2024-11-07 09:53:24.663749] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:24:57.207 [2024-11-07 09:53:24.663757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:57.207 [2024-11-07 09:53:24.663765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:24:57.207 [2024-11-07 09:53:24.663773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.720 ms 00:24:57.207 [2024-11-07 09:53:24.663782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:57.207 [2024-11-07 09:53:24.663818] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
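Before the NV cache scrub starts, the traced rpc.py calls that assembled this FTL instance are worth collecting into one sequence. Every command below already appears in the trace above, condensed with this run's UUIDs filled in; on a fresh machine the UUIDs would differ and error handling would be needed:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0    # base NVMe -> basen1
  $rpc bdev_lvol_delete_lvstore -u e95e26f1-4a0a-4bc8-ad09-596d8025d015   # clear_lvols: drop stale lvstore
  $rpc bdev_lvol_create_lvstore basen1 lvs                            # lvstore 'lvs' on basen1
  $rpc bdev_lvol_create basen1p0 20480 -t -u 15d36886-4c77-448f-918e-1560ac4937bc   # thin 20480 MiB lvol
  $rpc bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0   # cache NVMe -> cachen1
  $rpc bdev_split_create cachen1 -s 5120 1                            # one 5120 MiB split -> cachen1p0
  $rpc -t 60 bdev_ftl_create -b ftl -d c54de725-a81d-4b12-8f15-68a672be1863 -c cachen1p0 --l2p_dram_limit 2
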
00:24:57.207 [2024-11-07 09:53:24.663831] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:25:02.502 [2024-11-07 09:53:29.545357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:02.502 [2024-11-07 09:53:29.545420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:25:02.502 [2024-11-07 09:53:29.545435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4881.521 ms 00:25:02.502 [2024-11-07 09:53:29.545446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:02.502 [2024-11-07 09:53:29.570436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:02.502 [2024-11-07 09:53:29.570483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:25:02.502 [2024-11-07 09:53:29.570495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.787 ms 00:25:02.502 [2024-11-07 09:53:29.570505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:02.502 [2024-11-07 09:53:29.570573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:02.502 [2024-11-07 09:53:29.570586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:25:02.502 [2024-11-07 09:53:29.570594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:25:02.502 [2024-11-07 09:53:29.570606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:02.502 [2024-11-07 09:53:29.600697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:02.502 [2024-11-07 09:53:29.600841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:25:02.502 [2024-11-07 09:53:29.600857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.025 ms 00:25:02.502 [2024-11-07 09:53:29.600866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:02.502 [2024-11-07 09:53:29.600896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:02.502 [2024-11-07 09:53:29.600910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:25:02.502 [2024-11-07 09:53:29.600918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:25:02.502 [2024-11-07 09:53:29.600927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:02.502 [2024-11-07 09:53:29.601255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:02.502 [2024-11-07 09:53:29.601282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:25:02.502 [2024-11-07 09:53:29.601291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.285 ms 00:25:02.502 [2024-11-07 09:53:29.601301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:02.502 [2024-11-07 09:53:29.601343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:02.502 [2024-11-07 09:53:29.601353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:25:02.502 [2024-11-07 09:53:29.601363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:25:02.502 [2024-11-07 09:53:29.601374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:02.502 [2024-11-07 09:53:29.615236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:02.502 [2024-11-07 09:53:29.615269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:25:02.502 [2024-11-07 09:53:29.615279] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.846 ms 00:25:02.502 [2024-11-07 09:53:29.615303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:02.502 [2024-11-07 09:53:29.626901] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:25:02.502 [2024-11-07 09:53:29.627736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:02.502 [2024-11-07 09:53:29.627787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:25:02.502 [2024-11-07 09:53:29.627801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.366 ms 00:25:02.502 [2024-11-07 09:53:29.627809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:02.502 [2024-11-07 09:53:29.669647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:02.502 [2024-11-07 09:53:29.669685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:25:02.502 [2024-11-07 09:53:29.669700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 41.811 ms 00:25:02.502 [2024-11-07 09:53:29.669708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:02.502 [2024-11-07 09:53:29.669791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:02.502 [2024-11-07 09:53:29.669804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:25:02.502 [2024-11-07 09:53:29.669816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:25:02.502 [2024-11-07 09:53:29.669824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:02.502 [2024-11-07 09:53:29.692826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:02.502 [2024-11-07 09:53:29.692855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:25:02.502 [2024-11-07 09:53:29.692869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.959 ms 00:25:02.502 [2024-11-07 09:53:29.692877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:02.502 [2024-11-07 09:53:29.715838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:02.502 [2024-11-07 09:53:29.715953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:25:02.502 [2024-11-07 09:53:29.715972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.922 ms 00:25:02.502 [2024-11-07 09:53:29.715979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:02.502 [2024-11-07 09:53:29.716538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:02.502 [2024-11-07 09:53:29.716549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:25:02.502 [2024-11-07 09:53:29.716559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.529 ms 00:25:02.502 [2024-11-07 09:53:29.716566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:02.502 [2024-11-07 09:53:29.789338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:02.502 [2024-11-07 09:53:29.789372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:25:02.503 [2024-11-07 09:53:29.789387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 72.737 ms 00:25:02.503 [2024-11-07 09:53:29.789396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:02.503 [2024-11-07 09:53:29.813977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:25:02.503 [2024-11-07 09:53:29.814010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:25:02.503 [2024-11-07 09:53:29.814028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.513 ms 00:25:02.503 [2024-11-07 09:53:29.814036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:02.503 [2024-11-07 09:53:29.837525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:02.503 [2024-11-07 09:53:29.837555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:25:02.503 [2024-11-07 09:53:29.837567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.455 ms 00:25:02.503 [2024-11-07 09:53:29.837577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:02.503 [2024-11-07 09:53:29.861435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:02.503 [2024-11-07 09:53:29.861466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:25:02.503 [2024-11-07 09:53:29.861478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.823 ms 00:25:02.503 [2024-11-07 09:53:29.861485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:02.503 [2024-11-07 09:53:29.861524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:02.503 [2024-11-07 09:53:29.861534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:25:02.503 [2024-11-07 09:53:29.861546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:25:02.503 [2024-11-07 09:53:29.861553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:02.503 [2024-11-07 09:53:29.861645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:02.503 [2024-11-07 09:53:29.861656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:25:02.503 [2024-11-07 09:53:29.861669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:25:02.503 [2024-11-07 09:53:29.861676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:02.503 [2024-11-07 09:53:29.862504] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 5211.052 ms, result 0 00:25:02.503 { 00:25:02.503 "name": "ftl", 00:25:02.503 "uuid": "af303546-c6d2-4200-836f-b9c072444d67" 00:25:02.503 } 00:25:02.503 09:53:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:25:02.503 [2024-11-07 09:53:30.065932] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:02.503 09:53:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:25:02.765 09:53:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:25:03.027 [2024-11-07 09:53:30.478333] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:25:03.027 09:53:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:25:03.027 [2024-11-07 09:53:30.678686] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:25:03.027 09:53:30 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:25:03.600 Fill FTL, iteration 1 00:25:03.600 09:53:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:25:03.600 09:53:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:25:03.600 09:53:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:25:03.600 09:53:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:25:03.600 09:53:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:25:03.600 09:53:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:25:03.600 09:53:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:25:03.600 09:53:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:25:03.600 09:53:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:25:03.600 09:53:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:25:03.600 09:53:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:25:03.600 09:53:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:25:03.600 09:53:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:03.600 09:53:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:03.600 09:53:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:03.600 09:53:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:25:03.600 09:53:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=79062 00:25:03.600 09:53:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:25:03.600 09:53:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:25:03.600 09:53:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 79062 /var/tmp/spdk.tgt.sock 00:25:03.601 09:53:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 79062 ']' 00:25:03.601 09:53:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:25:03.601 09:53:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:03.601 09:53:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:25:03.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:25:03.601 09:53:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:03.601 09:53:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:03.601 [2024-11-07 09:53:31.155854] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
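Note: this is the point where the test goes split-process: the target that owns the FTL device has just exported it over NVMe/TCP, and tcp_initiator_setup starts a second spdk_tgt (pinned to core 1, with its own RPC socket at /var/tmp/spdk.tgt.sock) to act as the initiator. Condensed from the RPCs in the trace above, the target-side export sequence is:

  rpc.py nvmf_create_transport --trtype TCP
  rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
  rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
  rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1

(rpc.py abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py.) The initiator side then attaches the exported namespace as a local bdev, which comes back under the name ftln1 just below; that bdev is what all the spdk_dd traffic in the rest of the run targets.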
00:25:03.601 [2024-11-07 09:53:31.156135] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79062 ] 00:25:03.862 [2024-11-07 09:53:31.307267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:03.862 [2024-11-07 09:53:31.409202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:04.436 09:53:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:04.436 09:53:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:25:04.436 09:53:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:25:04.696 ftln1 00:25:04.696 09:53:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:25:04.696 09:53:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:25:04.957 09:53:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:25:04.957 09:53:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 79062 00:25:04.957 09:53:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # '[' -z 79062 ']' 00:25:04.957 09:53:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # kill -0 79062 00:25:04.957 09:53:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # uname 00:25:04.957 09:53:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:04.957 09:53:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79062 00:25:04.958 killing process with pid 79062 00:25:04.958 09:53:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:25:04.958 09:53:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:25:04.958 09:53:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79062' 00:25:04.958 09:53:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@971 -- # kill 79062 00:25:04.958 09:53:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@976 -- # wait 79062 00:25:06.366 09:53:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:25:06.366 09:53:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:25:06.366 [2024-11-07 09:53:33.989182] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:25:06.366 [2024-11-07 09:53:33.989295] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79105 ] 00:25:06.666 [2024-11-07 09:53:34.149515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:06.666 [2024-11-07 09:53:34.244880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:08.053  [2024-11-07T09:53:36.664Z] Copying: 189/1024 [MB] (189 MBps) [2024-11-07T09:53:37.607Z] Copying: 332/1024 [MB] (143 MBps) [2024-11-07T09:53:39.020Z] Copying: 530/1024 [MB] (198 MBps) [2024-11-07T09:53:39.960Z] Copying: 715/1024 [MB] (185 MBps) [2024-11-07T09:53:40.218Z] Copying: 909/1024 [MB] (194 MBps) [2024-11-07T09:53:41.153Z] Copying: 1024/1024 [MB] (average 183 MBps) 00:25:13.482 00:25:13.482 09:53:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:25:13.482 09:53:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:25:13.482 Calculate MD5 checksum, iteration 1 00:25:13.482 09:53:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:25:13.482 09:53:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:13.482 09:53:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:13.482 09:53:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:13.482 09:53:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:13.482 09:53:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:25:13.482 [2024-11-07 09:53:40.872444] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
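Note: each iteration is a write-then-read-back pass driven entirely through the bdev layer from the initiator side: 1 GiB of /dev/urandom is written to ftln1 at qd=2 in 1 MiB blocks, then the same range is read back into a scratch file for checksumming. Stripped of the --json/--rpc-socket plumbing, the pair of spdk_dd invocations above and below amounts to ($SPDK standing in for /home/vagrant/spdk_repo/spdk):

  # iteration 1, paraphrased from the trace; block size, depth and
  # counts are as logged, the config flags are omitted
  spdk_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0
  spdk_dd --ib=ftln1 --of=$SPDK/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0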
00:25:13.482 [2024-11-07 09:53:40.872703] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79180 ] 00:25:13.482 [2024-11-07 09:53:41.028635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:13.482 [2024-11-07 09:53:41.103699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:14.855  [2024-11-07T09:53:43.090Z] Copying: 695/1024 [MB] (695 MBps) [2024-11-07T09:53:43.348Z] Copying: 1024/1024 [MB] (average 689 MBps) 00:25:15.677 00:25:15.935 09:53:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:25:15.935 09:53:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:17.836 09:53:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:25:17.836 Fill FTL, iteration 2 00:25:17.836 09:53:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=ce1316c788b6791a2424553ea844e7f1 00:25:17.836 09:53:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:25:17.836 09:53:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:25:17.836 09:53:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:25:17.836 09:53:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:25:17.836 09:53:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:17.836 09:53:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:17.836 09:53:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:17.836 09:53:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:17.836 09:53:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:25:17.836 [2024-11-07 09:53:45.454837] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
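Note: the ce1316… value above is iteration 1's entry in the sums array. The harness records one MD5 per iteration with the md5sum-plus-cut pair visible in the trace, roughly:

  sums[i]=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' ')

These per-iteration checksums are presumably re-verified against fresh read-backs once the device has been shut down and brought back up; that comparison falls outside this excerpt.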
00:25:17.836 [2024-11-07 09:53:45.454923] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79230 ] 00:25:18.094 [2024-11-07 09:53:45.601934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:18.094 [2024-11-07 09:53:45.677198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:19.468  [2024-11-07T09:53:48.073Z] Copying: 260/1024 [MB] (260 MBps) [2024-11-07T09:53:49.006Z] Copying: 518/1024 [MB] (258 MBps) [2024-11-07T09:53:49.939Z] Copying: 780/1024 [MB] (262 MBps) [2024-11-07T09:53:50.505Z] Copying: 1024/1024 [MB] (average 258 MBps) 00:25:22.834 00:25:23.092 09:53:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:25:23.092 09:53:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:25:23.092 Calculate MD5 checksum, iteration 2 00:25:23.092 09:53:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:25:23.092 09:53:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:23.092 09:53:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:23.092 09:53:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:23.092 09:53:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:23.092 09:53:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:25:23.092 [2024-11-07 09:53:50.554975] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
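Note: iteration 2 is the same pass shifted one stride into the device: the --seek=1024/--skip=1024 used here advance by 1024 blocks of bs=1048576, i.e. exactly the size=1073741824 (1 GiB) declared at the top of the loop, so this pass exercises the second gigabyte of ftln1:

  bs=1048576; seek=1024
  echo $(( bs * seek ))   # 1073741824 -- iteration 2 starts 1 GiB in

Once this second fill is checksummed, the test flips the FTL properties (the bdev_ftl_set_property/bdev_ftl_get_properties calls below), ending with prep_upgrade_on_shutdown=true, before taking the target down.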
00:25:23.092 [2024-11-07 09:53:50.555063] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79283 ] 00:25:23.092 [2024-11-07 09:53:50.701713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:23.351 [2024-11-07 09:53:50.778029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:24.725  [2024-11-07T09:53:52.963Z] Copying: 686/1024 [MB] (686 MBps) [2024-11-07T09:53:53.532Z] Copying: 1024/1024 [MB] (average 686 MBps) 00:25:25.861 00:25:25.861 09:53:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:25:25.861 09:53:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:28.397 09:53:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:25:28.397 09:53:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=fd474a2526e94e064290d623aefa30bb 00:25:28.397 09:53:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:25:28.397 09:53:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:25:28.397 09:53:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:25:28.397 [2024-11-07 09:53:55.804344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:28.397 [2024-11-07 09:53:55.804384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:25:28.397 [2024-11-07 09:53:55.804396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:25:28.397 [2024-11-07 09:53:55.804402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:28.397 [2024-11-07 09:53:55.804422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:28.397 [2024-11-07 09:53:55.804429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:25:28.397 [2024-11-07 09:53:55.804436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:25:28.397 [2024-11-07 09:53:55.804445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:28.397 [2024-11-07 09:53:55.804460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:28.397 [2024-11-07 09:53:55.804467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:25:28.397 [2024-11-07 09:53:55.804473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:25:28.397 [2024-11-07 09:53:55.804479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:28.397 [2024-11-07 09:53:55.804526] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.173 ms, result 0 00:25:28.397 true 00:25:28.397 09:53:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:25:28.397 { 00:25:28.397 "name": "ftl", 00:25:28.397 "properties": [ 00:25:28.397 { 00:25:28.397 "name": "superblock_version", 00:25:28.397 "value": 5, 00:25:28.397 "read-only": true 00:25:28.397 }, 00:25:28.397 { 00:25:28.397 "name": "base_device", 00:25:28.397 "bands": [ 00:25:28.397 { 00:25:28.397 "id": 0, 00:25:28.397 "state": "FREE", 00:25:28.397 "validity": 0.0 
00:25:28.397 }, 00:25:28.397 { 00:25:28.397 "id": 1, 00:25:28.397 "state": "FREE", 00:25:28.397 "validity": 0.0 00:25:28.397 }, 00:25:28.397 { 00:25:28.397 "id": 2, 00:25:28.397 "state": "FREE", 00:25:28.397 "validity": 0.0 00:25:28.397 }, 00:25:28.397 { 00:25:28.397 "id": 3, 00:25:28.397 "state": "FREE", 00:25:28.397 "validity": 0.0 00:25:28.397 }, 00:25:28.397 { 00:25:28.397 "id": 4, 00:25:28.397 "state": "FREE", 00:25:28.397 "validity": 0.0 00:25:28.397 }, 00:25:28.397 { 00:25:28.397 "id": 5, 00:25:28.397 "state": "FREE", 00:25:28.397 "validity": 0.0 00:25:28.397 }, 00:25:28.397 { 00:25:28.397 "id": 6, 00:25:28.397 "state": "FREE", 00:25:28.397 "validity": 0.0 00:25:28.397 }, 00:25:28.397 { 00:25:28.397 "id": 7, 00:25:28.397 "state": "FREE", 00:25:28.397 "validity": 0.0 00:25:28.397 }, 00:25:28.397 { 00:25:28.397 "id": 8, 00:25:28.397 "state": "FREE", 00:25:28.397 "validity": 0.0 00:25:28.397 }, 00:25:28.397 { 00:25:28.397 "id": 9, 00:25:28.397 "state": "FREE", 00:25:28.397 "validity": 0.0 00:25:28.397 }, 00:25:28.397 { 00:25:28.397 "id": 10, 00:25:28.397 "state": "FREE", 00:25:28.397 "validity": 0.0 00:25:28.397 }, 00:25:28.397 { 00:25:28.397 "id": 11, 00:25:28.397 "state": "FREE", 00:25:28.397 "validity": 0.0 00:25:28.397 }, 00:25:28.397 { 00:25:28.397 "id": 12, 00:25:28.397 "state": "FREE", 00:25:28.397 "validity": 0.0 00:25:28.397 }, 00:25:28.397 { 00:25:28.397 "id": 13, 00:25:28.397 "state": "FREE", 00:25:28.397 "validity": 0.0 00:25:28.397 }, 00:25:28.397 { 00:25:28.397 "id": 14, 00:25:28.397 "state": "FREE", 00:25:28.397 "validity": 0.0 00:25:28.397 }, 00:25:28.397 { 00:25:28.397 "id": 15, 00:25:28.397 "state": "FREE", 00:25:28.397 "validity": 0.0 00:25:28.397 }, 00:25:28.397 { 00:25:28.397 "id": 16, 00:25:28.397 "state": "FREE", 00:25:28.397 "validity": 0.0 00:25:28.397 }, 00:25:28.397 { 00:25:28.397 "id": 17, 00:25:28.397 "state": "FREE", 00:25:28.397 "validity": 0.0 00:25:28.397 } 00:25:28.397 ], 00:25:28.397 "read-only": true 00:25:28.397 }, 00:25:28.397 { 00:25:28.397 "name": "cache_device", 00:25:28.397 "type": "bdev", 00:25:28.397 "chunks": [ 00:25:28.397 { 00:25:28.397 "id": 0, 00:25:28.397 "state": "INACTIVE", 00:25:28.397 "utilization": 0.0 00:25:28.397 }, 00:25:28.397 { 00:25:28.397 "id": 1, 00:25:28.397 "state": "CLOSED", 00:25:28.397 "utilization": 1.0 00:25:28.397 }, 00:25:28.397 { 00:25:28.397 "id": 2, 00:25:28.397 "state": "CLOSED", 00:25:28.397 "utilization": 1.0 00:25:28.397 }, 00:25:28.397 { 00:25:28.397 "id": 3, 00:25:28.397 "state": "OPEN", 00:25:28.397 "utilization": 0.001953125 00:25:28.397 }, 00:25:28.397 { 00:25:28.397 "id": 4, 00:25:28.397 "state": "OPEN", 00:25:28.397 "utilization": 0.0 00:25:28.397 } 00:25:28.397 ], 00:25:28.397 "read-only": true 00:25:28.397 }, 00:25:28.397 { 00:25:28.397 "name": "verbose_mode", 00:25:28.397 "value": true, 00:25:28.397 "unit": "", 00:25:28.397 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:25:28.397 }, 00:25:28.397 { 00:25:28.397 "name": "prep_upgrade_on_shutdown", 00:25:28.397 "value": false, 00:25:28.397 "unit": "", 00:25:28.398 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:25:28.398 } 00:25:28.398 ] 00:25:28.398 } 00:25:28.398 09:53:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:25:28.656 [2024-11-07 09:53:56.192682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:25:28.656 [2024-11-07 09:53:56.192718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:25:28.656 [2024-11-07 09:53:56.192728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:25:28.656 [2024-11-07 09:53:56.192734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:28.656 [2024-11-07 09:53:56.192752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:28.656 [2024-11-07 09:53:56.192760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:25:28.656 [2024-11-07 09:53:56.192766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:25:28.656 [2024-11-07 09:53:56.192772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:28.656 [2024-11-07 09:53:56.192787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:28.656 [2024-11-07 09:53:56.192793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:25:28.656 [2024-11-07 09:53:56.192799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:25:28.656 [2024-11-07 09:53:56.192804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:28.656 [2024-11-07 09:53:56.192847] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.157 ms, result 0 00:25:28.656 true 00:25:28.656 09:53:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:25:28.656 09:53:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:25:28.656 09:53:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:25:28.914 09:53:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:25:28.914 09:53:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:25:28.914 09:53:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:25:29.173 [2024-11-07 09:53:56.597021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:29.173 [2024-11-07 09:53:56.597056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:25:29.173 [2024-11-07 09:53:56.597066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:25:29.173 [2024-11-07 09:53:56.597072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:29.173 [2024-11-07 09:53:56.597088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:29.173 [2024-11-07 09:53:56.597095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:25:29.173 [2024-11-07 09:53:56.597100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:25:29.174 [2024-11-07 09:53:56.597107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:29.174 [2024-11-07 09:53:56.597121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:29.174 [2024-11-07 09:53:56.597127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:25:29.174 [2024-11-07 09:53:56.597133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:25:29.174 [2024-11-07 09:53:56.597138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:25:29.174 [2024-11-07 09:53:56.597193] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.166 ms, result 0 00:25:29.174 true 00:25:29.174 09:53:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:25:29.174 { 00:25:29.174 "name": "ftl", 00:25:29.174 "properties": [ 00:25:29.174 { 00:25:29.174 "name": "superblock_version", 00:25:29.174 "value": 5, 00:25:29.174 "read-only": true 00:25:29.174 }, 00:25:29.174 { 00:25:29.174 "name": "base_device", 00:25:29.174 "bands": [ 00:25:29.174 { 00:25:29.174 "id": 0, 00:25:29.174 "state": "FREE", 00:25:29.174 "validity": 0.0 00:25:29.174 }, 00:25:29.174 { 00:25:29.174 "id": 1, 00:25:29.174 "state": "FREE", 00:25:29.174 "validity": 0.0 00:25:29.174 }, 00:25:29.174 { 00:25:29.174 "id": 2, 00:25:29.174 "state": "FREE", 00:25:29.174 "validity": 0.0 00:25:29.174 }, 00:25:29.174 { 00:25:29.174 "id": 3, 00:25:29.174 "state": "FREE", 00:25:29.174 "validity": 0.0 00:25:29.174 }, 00:25:29.174 { 00:25:29.174 "id": 4, 00:25:29.174 "state": "FREE", 00:25:29.174 "validity": 0.0 00:25:29.174 }, 00:25:29.174 { 00:25:29.174 "id": 5, 00:25:29.174 "state": "FREE", 00:25:29.174 "validity": 0.0 00:25:29.174 }, 00:25:29.174 { 00:25:29.174 "id": 6, 00:25:29.174 "state": "FREE", 00:25:29.174 "validity": 0.0 00:25:29.174 }, 00:25:29.174 { 00:25:29.174 "id": 7, 00:25:29.174 "state": "FREE", 00:25:29.174 "validity": 0.0 00:25:29.174 }, 00:25:29.174 { 00:25:29.174 "id": 8, 00:25:29.174 "state": "FREE", 00:25:29.174 "validity": 0.0 00:25:29.174 }, 00:25:29.174 { 00:25:29.174 "id": 9, 00:25:29.174 "state": "FREE", 00:25:29.174 "validity": 0.0 00:25:29.174 }, 00:25:29.174 { 00:25:29.174 "id": 10, 00:25:29.174 "state": "FREE", 00:25:29.174 "validity": 0.0 00:25:29.174 }, 00:25:29.174 { 00:25:29.174 "id": 11, 00:25:29.174 "state": "FREE", 00:25:29.174 "validity": 0.0 00:25:29.174 }, 00:25:29.174 { 00:25:29.174 "id": 12, 00:25:29.174 "state": "FREE", 00:25:29.174 "validity": 0.0 00:25:29.174 }, 00:25:29.174 { 00:25:29.174 "id": 13, 00:25:29.174 "state": "FREE", 00:25:29.174 "validity": 0.0 00:25:29.174 }, 00:25:29.174 { 00:25:29.174 "id": 14, 00:25:29.174 "state": "FREE", 00:25:29.174 "validity": 0.0 00:25:29.174 }, 00:25:29.174 { 00:25:29.174 "id": 15, 00:25:29.174 "state": "FREE", 00:25:29.174 "validity": 0.0 00:25:29.174 }, 00:25:29.174 { 00:25:29.174 "id": 16, 00:25:29.174 "state": "FREE", 00:25:29.174 "validity": 0.0 00:25:29.174 }, 00:25:29.174 { 00:25:29.174 "id": 17, 00:25:29.174 "state": "FREE", 00:25:29.174 "validity": 0.0 00:25:29.174 } 00:25:29.174 ], 00:25:29.174 "read-only": true 00:25:29.174 }, 00:25:29.174 { 00:25:29.174 "name": "cache_device", 00:25:29.174 "type": "bdev", 00:25:29.174 "chunks": [ 00:25:29.174 { 00:25:29.174 "id": 0, 00:25:29.174 "state": "INACTIVE", 00:25:29.174 "utilization": 0.0 00:25:29.174 }, 00:25:29.174 { 00:25:29.174 "id": 1, 00:25:29.174 "state": "CLOSED", 00:25:29.174 "utilization": 1.0 00:25:29.174 }, 00:25:29.174 { 00:25:29.174 "id": 2, 00:25:29.174 "state": "CLOSED", 00:25:29.174 "utilization": 1.0 00:25:29.174 }, 00:25:29.174 { 00:25:29.174 "id": 3, 00:25:29.174 "state": "OPEN", 00:25:29.174 "utilization": 0.001953125 00:25:29.174 }, 00:25:29.174 { 00:25:29.174 "id": 4, 00:25:29.174 "state": "OPEN", 00:25:29.174 "utilization": 0.0 00:25:29.174 } 00:25:29.174 ], 00:25:29.174 "read-only": true 00:25:29.174 }, 00:25:29.174 { 00:25:29.174 "name": "verbose_mode", 
00:25:29.174 "value": true, 00:25:29.174 "unit": "", 00:25:29.174 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:25:29.174 }, 00:25:29.174 { 00:25:29.174 "name": "prep_upgrade_on_shutdown", 00:25:29.174 "value": true, 00:25:29.174 "unit": "", 00:25:29.174 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:25:29.174 } 00:25:29.174 ] 00:25:29.174 } 00:25:29.174 09:53:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:25:29.174 09:53:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 78924 ]] 00:25:29.174 09:53:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 78924 00:25:29.174 09:53:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # '[' -z 78924 ']' 00:25:29.174 09:53:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # kill -0 78924 00:25:29.174 09:53:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # uname 00:25:29.174 09:53:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:29.174 09:53:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 78924 00:25:29.174 killing process with pid 78924 00:25:29.174 09:53:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:29.174 09:53:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:29.174 09:53:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # echo 'killing process with pid 78924' 00:25:29.174 09:53:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@971 -- # kill 78924 00:25:29.174 09:53:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@976 -- # wait 78924 00:25:29.741 [2024-11-07 09:53:57.362819] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:25:29.741 [2024-11-07 09:53:57.374915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:29.741 [2024-11-07 09:53:57.374952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:25:29.741 [2024-11-07 09:53:57.374962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:25:29.741 [2024-11-07 09:53:57.374968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:29.741 [2024-11-07 09:53:57.374985] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:25:29.741 [2024-11-07 09:53:57.377110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:29.741 [2024-11-07 09:53:57.377138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:25:29.741 [2024-11-07 09:53:57.377145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.115 ms 00:25:29.741 [2024-11-07 09:53:57.377152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:39.727 [2024-11-07 09:54:05.543813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:39.727 [2024-11-07 09:54:05.543867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:25:39.727 [2024-11-07 09:54:05.543879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8166.614 ms 00:25:39.727 [2024-11-07 09:54:05.543886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:39.727 [2024-11-07 09:54:05.544870] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:25:39.727 [2024-11-07 09:54:05.544889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:25:39.728 [2024-11-07 09:54:05.544896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.968 ms 00:25:39.728 [2024-11-07 09:54:05.544902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:39.728 [2024-11-07 09:54:05.545772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:39.728 [2024-11-07 09:54:05.545792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:25:39.728 [2024-11-07 09:54:05.545799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.853 ms 00:25:39.728 [2024-11-07 09:54:05.545805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:39.728 [2024-11-07 09:54:05.553407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:39.728 [2024-11-07 09:54:05.553436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:25:39.728 [2024-11-07 09:54:05.553444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.574 ms 00:25:39.728 [2024-11-07 09:54:05.553451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:39.728 [2024-11-07 09:54:05.558353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:39.728 [2024-11-07 09:54:05.558381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:25:39.728 [2024-11-07 09:54:05.558390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.875 ms 00:25:39.728 [2024-11-07 09:54:05.558396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:39.728 [2024-11-07 09:54:05.558449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:39.728 [2024-11-07 09:54:05.558457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:25:39.728 [2024-11-07 09:54:05.558464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:25:39.728 [2024-11-07 09:54:05.558473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:39.728 [2024-11-07 09:54:05.565460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:39.728 [2024-11-07 09:54:05.565486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:25:39.728 [2024-11-07 09:54:05.565494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.975 ms 00:25:39.728 [2024-11-07 09:54:05.565499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:39.728 [2024-11-07 09:54:05.572449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:39.728 [2024-11-07 09:54:05.572474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:25:39.728 [2024-11-07 09:54:05.572482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.927 ms 00:25:39.728 [2024-11-07 09:54:05.572487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:39.728 [2024-11-07 09:54:05.579550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:39.728 [2024-11-07 09:54:05.579575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:25:39.728 [2024-11-07 09:54:05.579582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.040 ms 00:25:39.728 [2024-11-07 09:54:05.579587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:39.728 [2024-11-07 09:54:05.586581] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:39.728 [2024-11-07 09:54:05.586606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:25:39.728 [2024-11-07 09:54:05.586613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.937 ms 00:25:39.728 [2024-11-07 09:54:05.586618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:39.728 [2024-11-07 09:54:05.586647] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:25:39.728 [2024-11-07 09:54:05.586659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:39.728 [2024-11-07 09:54:05.586667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:25:39.728 [2024-11-07 09:54:05.586679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:25:39.728 [2024-11-07 09:54:05.586686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:39.728 [2024-11-07 09:54:05.586692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:39.728 [2024-11-07 09:54:05.586699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:39.728 [2024-11-07 09:54:05.586705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:39.728 [2024-11-07 09:54:05.586710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:39.728 [2024-11-07 09:54:05.586716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:39.728 [2024-11-07 09:54:05.586721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:39.728 [2024-11-07 09:54:05.586727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:39.728 [2024-11-07 09:54:05.586732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:39.728 [2024-11-07 09:54:05.586738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:39.728 [2024-11-07 09:54:05.586744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:39.728 [2024-11-07 09:54:05.586749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:39.728 [2024-11-07 09:54:05.586755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:39.728 [2024-11-07 09:54:05.586760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:39.728 [2024-11-07 09:54:05.586766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:39.728 [2024-11-07 09:54:05.586773] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:25:39.728 [2024-11-07 09:54:05.586779] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: af303546-c6d2-4200-836f-b9c072444d67 00:25:39.728 [2024-11-07 09:54:05.586784] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:25:39.728 [2024-11-07 09:54:05.586790] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:25:39.728 [2024-11-07 09:54:05.586795] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:25:39.728 [2024-11-07 09:54:05.586801] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:25:39.728 [2024-11-07 09:54:05.586806] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:25:39.728 [2024-11-07 09:54:05.586812] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:25:39.728 [2024-11-07 09:54:05.586820] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:25:39.728 [2024-11-07 09:54:05.586824] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:25:39.728 [2024-11-07 09:54:05.586829] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:25:39.728 [2024-11-07 09:54:05.586834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:39.728 [2024-11-07 09:54:05.586840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:25:39.728 [2024-11-07 09:54:05.586848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.188 ms 00:25:39.728 [2024-11-07 09:54:05.586854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:39.728 [2024-11-07 09:54:05.596305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:39.728 [2024-11-07 09:54:05.596329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:25:39.728 [2024-11-07 09:54:05.596337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.439 ms 00:25:39.728 [2024-11-07 09:54:05.596343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:39.728 [2024-11-07 09:54:05.596613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:39.728 [2024-11-07 09:54:05.596625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:25:39.728 [2024-11-07 09:54:05.596641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.253 ms 00:25:39.728 [2024-11-07 09:54:05.596647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:39.728 [2024-11-07 09:54:05.629511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:39.728 [2024-11-07 09:54:05.629538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:25:39.728 [2024-11-07 09:54:05.629547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:39.728 [2024-11-07 09:54:05.629556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:39.728 [2024-11-07 09:54:05.629578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:39.728 [2024-11-07 09:54:05.629585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:25:39.728 [2024-11-07 09:54:05.629590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:39.728 [2024-11-07 09:54:05.629596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:39.728 [2024-11-07 09:54:05.629649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:39.728 [2024-11-07 09:54:05.629657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:25:39.728 [2024-11-07 09:54:05.629663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:39.728 [2024-11-07 09:54:05.629669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:39.728 [2024-11-07 09:54:05.629683] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:39.728 [2024-11-07 09:54:05.629689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:25:39.728 [2024-11-07 09:54:05.629695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:39.729 [2024-11-07 09:54:05.629700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:39.729 [2024-11-07 09:54:05.687735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:39.729 [2024-11-07 09:54:05.687768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:25:39.729 [2024-11-07 09:54:05.687776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:39.729 [2024-11-07 09:54:05.687781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:39.729 [2024-11-07 09:54:05.735985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:39.729 [2024-11-07 09:54:05.736017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:25:39.729 [2024-11-07 09:54:05.736025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:39.729 [2024-11-07 09:54:05.736031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:39.729 [2024-11-07 09:54:05.736091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:39.729 [2024-11-07 09:54:05.736099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:25:39.729 [2024-11-07 09:54:05.736105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:39.729 [2024-11-07 09:54:05.736111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:39.729 [2024-11-07 09:54:05.736142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:39.729 [2024-11-07 09:54:05.736153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:25:39.729 [2024-11-07 09:54:05.736159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:39.729 [2024-11-07 09:54:05.736164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:39.729 [2024-11-07 09:54:05.736230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:39.729 [2024-11-07 09:54:05.736237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:25:39.729 [2024-11-07 09:54:05.736243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:39.729 [2024-11-07 09:54:05.736249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:39.729 [2024-11-07 09:54:05.736270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:39.729 [2024-11-07 09:54:05.736277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:25:39.729 [2024-11-07 09:54:05.736286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:39.729 [2024-11-07 09:54:05.736292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:39.729 [2024-11-07 09:54:05.736319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:39.729 [2024-11-07 09:54:05.736325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:25:39.729 [2024-11-07 09:54:05.736332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:39.729 [2024-11-07 09:54:05.736338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:39.729 
[2024-11-07 09:54:05.736371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:39.729 [2024-11-07 09:54:05.736380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:25:39.729 [2024-11-07 09:54:05.736387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:39.729 [2024-11-07 09:54:05.736392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:39.729 [2024-11-07 09:54:05.736482] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8361.520 ms, result 0 00:25:40.665 09:54:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:25:40.665 09:54:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:25:40.665 09:54:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:25:40.665 09:54:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:25:40.665 09:54:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:40.665 09:54:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=79470 00:25:40.665 09:54:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:25:40.665 09:54:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 79470 00:25:40.665 09:54:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 79470 ']' 00:25:40.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:40.665 09:54:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:40.665 09:54:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:40.665 09:54:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:40.665 09:54:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:40.665 09:54:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:40.665 09:54:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:40.665 [2024-11-07 09:54:08.160065] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
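Note: this is the restart half of the test. A fresh spdk_tgt (now pinned to core 0) is brought up from the JSON config captured earlier with save_config rather than via live RPCs, so the FTL device is reconstructed from its on-disk superblock; the 'SHM: clean 0, shm_clean 0' and layout-load messages below are it detecting the dirty, upgrade-prepared state left by the shutdown (whose 8361.520 ms duration above went mostly to 'Stop core poller' and the Persist steps). The relaunch, as logged:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
      --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json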
00:25:40.665 [2024-11-07 09:54:08.160176] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79470 ] 00:25:40.665 [2024-11-07 09:54:08.317214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:40.923 [2024-11-07 09:54:08.398575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:41.490 [2024-11-07 09:54:08.968699] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:25:41.490 [2024-11-07 09:54:08.968751] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:25:41.490 [2024-11-07 09:54:09.111776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:41.490 [2024-11-07 09:54:09.111826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:25:41.490 [2024-11-07 09:54:09.111837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:25:41.490 [2024-11-07 09:54:09.111843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:41.490 [2024-11-07 09:54:09.111888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:41.490 [2024-11-07 09:54:09.111897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:25:41.490 [2024-11-07 09:54:09.111904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:25:41.490 [2024-11-07 09:54:09.111910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:41.490 [2024-11-07 09:54:09.111928] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:25:41.490 [2024-11-07 09:54:09.112482] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:25:41.490 [2024-11-07 09:54:09.112502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:41.490 [2024-11-07 09:54:09.112508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:25:41.490 [2024-11-07 09:54:09.112515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.580 ms 00:25:41.490 [2024-11-07 09:54:09.112521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:41.490 [2024-11-07 09:54:09.113543] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:25:41.490 [2024-11-07 09:54:09.123359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:41.490 [2024-11-07 09:54:09.123391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:25:41.490 [2024-11-07 09:54:09.123404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.817 ms 00:25:41.490 [2024-11-07 09:54:09.123410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:41.490 [2024-11-07 09:54:09.123462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:41.490 [2024-11-07 09:54:09.123470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:25:41.490 [2024-11-07 09:54:09.123476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:25:41.490 [2024-11-07 09:54:09.123481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:41.490 [2024-11-07 09:54:09.128201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:41.490 [2024-11-07 
09:54:09.128235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:25:41.490 [2024-11-07 09:54:09.128243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.668 ms 00:25:41.490 [2024-11-07 09:54:09.128249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:41.490 [2024-11-07 09:54:09.128295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:41.490 [2024-11-07 09:54:09.128302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:25:41.490 [2024-11-07 09:54:09.128309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:25:41.490 [2024-11-07 09:54:09.128315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:41.490 [2024-11-07 09:54:09.128352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:41.490 [2024-11-07 09:54:09.128359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:25:41.490 [2024-11-07 09:54:09.128368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:25:41.490 [2024-11-07 09:54:09.128374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:41.490 [2024-11-07 09:54:09.128392] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:25:41.490 [2024-11-07 09:54:09.131028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:41.490 [2024-11-07 09:54:09.131054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:25:41.490 [2024-11-07 09:54:09.131061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.642 ms 00:25:41.490 [2024-11-07 09:54:09.131070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:41.490 [2024-11-07 09:54:09.131093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:41.491 [2024-11-07 09:54:09.131100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:25:41.491 [2024-11-07 09:54:09.131106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:25:41.491 [2024-11-07 09:54:09.131112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:41.491 [2024-11-07 09:54:09.131131] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:25:41.491 [2024-11-07 09:54:09.131146] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:25:41.491 [2024-11-07 09:54:09.131176] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:25:41.491 [2024-11-07 09:54:09.131187] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:25:41.491 [2024-11-07 09:54:09.131265] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:25:41.491 [2024-11-07 09:54:09.131273] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:25:41.491 [2024-11-07 09:54:09.131296] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:25:41.491 [2024-11-07 09:54:09.131304] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:25:41.491 [2024-11-07 09:54:09.131311] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:25:41.491 [2024-11-07 09:54:09.131319] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:25:41.491 [2024-11-07 09:54:09.131325] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:25:41.491 [2024-11-07 09:54:09.131331] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:25:41.491 [2024-11-07 09:54:09.131336] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:25:41.491 [2024-11-07 09:54:09.131343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:41.491 [2024-11-07 09:54:09.131348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:25:41.491 [2024-11-07 09:54:09.131354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.214 ms 00:25:41.491 [2024-11-07 09:54:09.131359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:41.491 [2024-11-07 09:54:09.131424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:41.491 [2024-11-07 09:54:09.131431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:25:41.491 [2024-11-07 09:54:09.131436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:25:41.491 [2024-11-07 09:54:09.131444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:41.491 [2024-11-07 09:54:09.131519] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:25:41.491 [2024-11-07 09:54:09.131527] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:25:41.491 [2024-11-07 09:54:09.131533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:41.491 [2024-11-07 09:54:09.131539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:41.491 [2024-11-07 09:54:09.131545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:25:41.491 [2024-11-07 09:54:09.131550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:25:41.491 [2024-11-07 09:54:09.131555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:25:41.491 [2024-11-07 09:54:09.131560] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:25:41.491 [2024-11-07 09:54:09.131565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:25:41.491 [2024-11-07 09:54:09.131570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:41.491 [2024-11-07 09:54:09.131575] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:25:41.491 [2024-11-07 09:54:09.131580] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:25:41.491 [2024-11-07 09:54:09.131584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:41.491 [2024-11-07 09:54:09.131590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:25:41.491 [2024-11-07 09:54:09.131596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:25:41.491 [2024-11-07 09:54:09.131601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:41.491 [2024-11-07 09:54:09.131606] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:25:41.491 [2024-11-07 09:54:09.131611] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:25:41.491 [2024-11-07 09:54:09.131616] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:41.491 [2024-11-07 09:54:09.131622] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:25:41.491 [2024-11-07 09:54:09.131637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:25:41.491 [2024-11-07 09:54:09.131642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:41.491 [2024-11-07 09:54:09.131647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:25:41.491 [2024-11-07 09:54:09.131652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:25:41.491 [2024-11-07 09:54:09.131657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:41.491 [2024-11-07 09:54:09.131667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:25:41.491 [2024-11-07 09:54:09.131673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:25:41.491 [2024-11-07 09:54:09.131678] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:41.491 [2024-11-07 09:54:09.131683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:25:41.491 [2024-11-07 09:54:09.131687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:25:41.491 [2024-11-07 09:54:09.131692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:41.491 [2024-11-07 09:54:09.131697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:25:41.491 [2024-11-07 09:54:09.131702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:25:41.491 [2024-11-07 09:54:09.131707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:41.491 [2024-11-07 09:54:09.131712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:25:41.491 [2024-11-07 09:54:09.131717] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:25:41.491 [2024-11-07 09:54:09.131721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:41.491 [2024-11-07 09:54:09.131726] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:25:41.491 [2024-11-07 09:54:09.131732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:25:41.491 [2024-11-07 09:54:09.131736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:41.491 [2024-11-07 09:54:09.131741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:25:41.491 [2024-11-07 09:54:09.131746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:25:41.491 [2024-11-07 09:54:09.131751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:41.491 [2024-11-07 09:54:09.131755] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:25:41.491 [2024-11-07 09:54:09.131761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:25:41.491 [2024-11-07 09:54:09.131773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:41.491 [2024-11-07 09:54:09.131778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:41.491 [2024-11-07 09:54:09.131786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:25:41.491 [2024-11-07 09:54:09.131791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:25:41.491 [2024-11-07 09:54:09.131796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:25:41.491 [2024-11-07 09:54:09.131801] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:25:41.491 [2024-11-07 09:54:09.131805] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:25:41.491 [2024-11-07 09:54:09.131810] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:25:41.491 [2024-11-07 09:54:09.131816] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:25:41.491 [2024-11-07 09:54:09.131822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:41.491 [2024-11-07 09:54:09.131829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:25:41.491 [2024-11-07 09:54:09.131834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:25:41.491 [2024-11-07 09:54:09.131839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:25:41.491 [2024-11-07 09:54:09.131845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:25:41.491 [2024-11-07 09:54:09.131850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:25:41.491 [2024-11-07 09:54:09.131855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:25:41.491 [2024-11-07 09:54:09.131860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:25:41.491 [2024-11-07 09:54:09.131865] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:25:41.491 [2024-11-07 09:54:09.131871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:25:41.491 [2024-11-07 09:54:09.131876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:25:41.491 [2024-11-07 09:54:09.131881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:25:41.491 [2024-11-07 09:54:09.131886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:25:41.491 [2024-11-07 09:54:09.131892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:25:41.491 [2024-11-07 09:54:09.131897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:25:41.491 [2024-11-07 09:54:09.131902] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:25:41.491 [2024-11-07 09:54:09.131908] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:41.491 [2024-11-07 09:54:09.131914] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:41.491 [2024-11-07 09:54:09.131919] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:25:41.491 [2024-11-07 09:54:09.131925] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:25:41.492 [2024-11-07 09:54:09.131930] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:25:41.492 [2024-11-07 09:54:09.131935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:41.492 [2024-11-07 09:54:09.131940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:25:41.492 [2024-11-07 09:54:09.131948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.468 ms 00:25:41.492 [2024-11-07 09:54:09.131953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:41.492 [2024-11-07 09:54:09.131984] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:25:41.492 [2024-11-07 09:54:09.131992] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:25:44.787 [2024-11-07 09:54:11.954438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:44.787 [2024-11-07 09:54:11.954500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:25:44.787 [2024-11-07 09:54:11.954515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2822.441 ms 00:25:44.787 [2024-11-07 09:54:11.954523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:44.787 [2024-11-07 09:54:11.979881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:44.787 [2024-11-07 09:54:11.979923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:25:44.787 [2024-11-07 09:54:11.979935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.159 ms 00:25:44.787 [2024-11-07 09:54:11.979942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:44.787 [2024-11-07 09:54:11.980019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:44.787 [2024-11-07 09:54:11.980033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:25:44.787 [2024-11-07 09:54:11.980041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:25:44.787 [2024-11-07 09:54:11.980049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:44.787 [2024-11-07 09:54:12.010116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:44.787 [2024-11-07 09:54:12.010156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:25:44.787 [2024-11-07 09:54:12.010168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.030 ms 00:25:44.787 [2024-11-07 09:54:12.010178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:44.787 [2024-11-07 09:54:12.010208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:44.787 [2024-11-07 09:54:12.010215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:25:44.787 [2024-11-07 09:54:12.010223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:25:44.787 [2024-11-07 09:54:12.010231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:44.787 [2024-11-07 09:54:12.010599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:44.787 [2024-11-07 09:54:12.010623] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:25:44.787 [2024-11-07 09:54:12.010644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.300 ms 00:25:44.787 [2024-11-07 09:54:12.010652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:44.787 [2024-11-07 09:54:12.010697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:44.787 [2024-11-07 09:54:12.010705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:25:44.787 [2024-11-07 09:54:12.010714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:25:44.787 [2024-11-07 09:54:12.010720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:44.787 [2024-11-07 09:54:12.024757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:44.787 [2024-11-07 09:54:12.024789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:25:44.787 [2024-11-07 09:54:12.024799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.016 ms 00:25:44.787 [2024-11-07 09:54:12.024807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:44.787 [2024-11-07 09:54:12.037554] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:25:44.787 [2024-11-07 09:54:12.037587] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:25:44.787 [2024-11-07 09:54:12.037599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:44.787 [2024-11-07 09:54:12.037607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:25:44.787 [2024-11-07 09:54:12.037616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.693 ms 00:25:44.787 [2024-11-07 09:54:12.037623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:44.787 [2024-11-07 09:54:12.051274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:44.787 [2024-11-07 09:54:12.051312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:25:44.787 [2024-11-07 09:54:12.051322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.622 ms 00:25:44.787 [2024-11-07 09:54:12.051330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:44.787 [2024-11-07 09:54:12.062902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:44.787 [2024-11-07 09:54:12.062932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:25:44.787 [2024-11-07 09:54:12.062941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.534 ms 00:25:44.787 [2024-11-07 09:54:12.062948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:44.787 [2024-11-07 09:54:12.074558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:44.787 [2024-11-07 09:54:12.074587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:25:44.787 [2024-11-07 09:54:12.074596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.590 ms 00:25:44.787 [2024-11-07 09:54:12.074603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:44.787 [2024-11-07 09:54:12.075205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:44.787 [2024-11-07 09:54:12.075231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:25:44.787 [2024-11-07 
09:54:12.075240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.525 ms 00:25:44.787 [2024-11-07 09:54:12.075247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:44.787 [2024-11-07 09:54:12.137152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:44.787 [2024-11-07 09:54:12.137207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:25:44.787 [2024-11-07 09:54:12.137221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 61.885 ms 00:25:44.787 [2024-11-07 09:54:12.137230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:44.787 [2024-11-07 09:54:12.147409] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:25:44.787 [2024-11-07 09:54:12.148237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:44.787 [2024-11-07 09:54:12.148265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:25:44.787 [2024-11-07 09:54:12.148277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.945 ms 00:25:44.787 [2024-11-07 09:54:12.148285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:44.787 [2024-11-07 09:54:12.148383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:44.787 [2024-11-07 09:54:12.148397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:25:44.787 [2024-11-07 09:54:12.148407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:25:44.787 [2024-11-07 09:54:12.148414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:44.787 [2024-11-07 09:54:12.148467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:44.787 [2024-11-07 09:54:12.148477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:25:44.787 [2024-11-07 09:54:12.148485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:25:44.787 [2024-11-07 09:54:12.148492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:44.787 [2024-11-07 09:54:12.148512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:44.787 [2024-11-07 09:54:12.148519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:25:44.787 [2024-11-07 09:54:12.148527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:25:44.787 [2024-11-07 09:54:12.148537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:44.787 [2024-11-07 09:54:12.148567] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:25:44.787 [2024-11-07 09:54:12.148577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:44.787 [2024-11-07 09:54:12.148584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:25:44.787 [2024-11-07 09:54:12.148591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:25:44.787 [2024-11-07 09:54:12.148599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:44.787 [2024-11-07 09:54:12.171558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:44.787 [2024-11-07 09:54:12.171598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:25:44.787 [2024-11-07 09:54:12.171609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.939 ms 00:25:44.787 [2024-11-07 09:54:12.171617] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:44.787 [2024-11-07 09:54:12.171696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:44.787 [2024-11-07 09:54:12.171706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:25:44.787 [2024-11-07 09:54:12.171714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:25:44.787 [2024-11-07 09:54:12.171721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:44.787 [2024-11-07 09:54:12.172904] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3060.687 ms, result 0 00:25:44.787 [2024-11-07 09:54:12.187923] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:44.787 [2024-11-07 09:54:12.203909] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:25:44.787 [2024-11-07 09:54:12.212045] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:25:44.787 09:54:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:44.787 09:54:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:25:44.787 09:54:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:44.787 09:54:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:25:44.787 09:54:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:25:45.049 [2024-11-07 09:54:12.596295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:45.049 [2024-11-07 09:54:12.596338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:25:45.049 [2024-11-07 09:54:12.596353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:25:45.049 [2024-11-07 09:54:12.596364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:45.049 [2024-11-07 09:54:12.596387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:45.049 [2024-11-07 09:54:12.596396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:25:45.049 [2024-11-07 09:54:12.596404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:25:45.049 [2024-11-07 09:54:12.596411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:45.049 [2024-11-07 09:54:12.596431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:45.049 [2024-11-07 09:54:12.596438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:25:45.049 [2024-11-07 09:54:12.596447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:25:45.049 [2024-11-07 09:54:12.596454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:45.049 [2024-11-07 09:54:12.596512] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.207 ms, result 0 00:25:45.049 true 00:25:45.049 09:54:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:25:45.310 { 00:25:45.310 "name": "ftl", 00:25:45.310 "properties": [ 00:25:45.310 { 00:25:45.310 "name": "superblock_version", 00:25:45.310 "value": 5, 00:25:45.310 "read-only": true 00:25:45.310 }, 
00:25:45.310 { 00:25:45.310 "name": "base_device", 00:25:45.310 "bands": [ 00:25:45.310 { 00:25:45.310 "id": 0, 00:25:45.310 "state": "CLOSED", 00:25:45.310 "validity": 1.0 00:25:45.310 }, 00:25:45.310 { 00:25:45.310 "id": 1, 00:25:45.310 "state": "CLOSED", 00:25:45.310 "validity": 1.0 00:25:45.310 }, 00:25:45.310 { 00:25:45.310 "id": 2, 00:25:45.310 "state": "CLOSED", 00:25:45.310 "validity": 0.007843137254901933 00:25:45.310 }, 00:25:45.310 { 00:25:45.310 "id": 3, 00:25:45.310 "state": "FREE", 00:25:45.310 "validity": 0.0 00:25:45.310 }, 00:25:45.310 { 00:25:45.310 "id": 4, 00:25:45.310 "state": "FREE", 00:25:45.310 "validity": 0.0 00:25:45.310 }, 00:25:45.310 { 00:25:45.310 "id": 5, 00:25:45.310 "state": "FREE", 00:25:45.310 "validity": 0.0 00:25:45.310 }, 00:25:45.310 { 00:25:45.310 "id": 6, 00:25:45.310 "state": "FREE", 00:25:45.310 "validity": 0.0 00:25:45.310 }, 00:25:45.310 { 00:25:45.310 "id": 7, 00:25:45.310 "state": "FREE", 00:25:45.310 "validity": 0.0 00:25:45.310 }, 00:25:45.310 { 00:25:45.310 "id": 8, 00:25:45.310 "state": "FREE", 00:25:45.310 "validity": 0.0 00:25:45.310 }, 00:25:45.310 { 00:25:45.310 "id": 9, 00:25:45.310 "state": "FREE", 00:25:45.310 "validity": 0.0 00:25:45.310 }, 00:25:45.310 { 00:25:45.310 "id": 10, 00:25:45.310 "state": "FREE", 00:25:45.310 "validity": 0.0 00:25:45.310 }, 00:25:45.310 { 00:25:45.310 "id": 11, 00:25:45.310 "state": "FREE", 00:25:45.310 "validity": 0.0 00:25:45.310 }, 00:25:45.310 { 00:25:45.310 "id": 12, 00:25:45.310 "state": "FREE", 00:25:45.310 "validity": 0.0 00:25:45.310 }, 00:25:45.310 { 00:25:45.310 "id": 13, 00:25:45.310 "state": "FREE", 00:25:45.310 "validity": 0.0 00:25:45.310 }, 00:25:45.310 { 00:25:45.310 "id": 14, 00:25:45.310 "state": "FREE", 00:25:45.310 "validity": 0.0 00:25:45.310 }, 00:25:45.310 { 00:25:45.310 "id": 15, 00:25:45.310 "state": "FREE", 00:25:45.310 "validity": 0.0 00:25:45.310 }, 00:25:45.310 { 00:25:45.310 "id": 16, 00:25:45.310 "state": "FREE", 00:25:45.310 "validity": 0.0 00:25:45.310 }, 00:25:45.310 { 00:25:45.310 "id": 17, 00:25:45.310 "state": "FREE", 00:25:45.310 "validity": 0.0 00:25:45.310 } 00:25:45.310 ], 00:25:45.310 "read-only": true 00:25:45.310 }, 00:25:45.310 { 00:25:45.310 "name": "cache_device", 00:25:45.310 "type": "bdev", 00:25:45.310 "chunks": [ 00:25:45.310 { 00:25:45.310 "id": 0, 00:25:45.310 "state": "INACTIVE", 00:25:45.310 "utilization": 0.0 00:25:45.310 }, 00:25:45.310 { 00:25:45.310 "id": 1, 00:25:45.310 "state": "OPEN", 00:25:45.310 "utilization": 0.0 00:25:45.310 }, 00:25:45.310 { 00:25:45.310 "id": 2, 00:25:45.310 "state": "OPEN", 00:25:45.310 "utilization": 0.0 00:25:45.310 }, 00:25:45.310 { 00:25:45.310 "id": 3, 00:25:45.310 "state": "FREE", 00:25:45.310 "utilization": 0.0 00:25:45.310 }, 00:25:45.310 { 00:25:45.310 "id": 4, 00:25:45.310 "state": "FREE", 00:25:45.310 "utilization": 0.0 00:25:45.310 } 00:25:45.310 ], 00:25:45.310 "read-only": true 00:25:45.310 }, 00:25:45.310 { 00:25:45.310 "name": "verbose_mode", 00:25:45.310 "value": true, 00:25:45.310 "unit": "", 00:25:45.310 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:25:45.310 }, 00:25:45.310 { 00:25:45.310 "name": "prep_upgrade_on_shutdown", 00:25:45.310 "value": false, 00:25:45.310 "unit": "", 00:25:45.310 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:25:45.310 } 00:25:45.310 ] 00:25:45.310 } 00:25:45.311 09:54:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:25:45.311 09:54:12 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:25:45.311 09:54:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:25:45.571 09:54:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:25:45.571 09:54:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:25:45.571 09:54:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:25:45.571 09:54:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:25:45.571 09:54:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:25:45.833 Validate MD5 checksum, iteration 1 00:25:45.833 09:54:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:25:45.833 09:54:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:25:45.833 09:54:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:25:45.833 09:54:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:25:45.833 09:54:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:25:45.833 09:54:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:25:45.833 09:54:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:25:45.833 09:54:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:25:45.833 09:54:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:45.833 09:54:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:45.833 09:54:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:45.833 09:54:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:45.833 09:54:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:25:45.833 [2024-11-07 09:54:13.310704] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
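"Validate MD5 checksum, iteration 1" launches a second SPDK application, spdk_dd (pid 79539), pinned to core 1 with its own RPC socket; it acts as the NVMe/TCP initiator that reads the exported ftln1 namespace back into a file. A sketch of the tcp_dd wrapper as reconstructed from the command line in the trace (the wrapper name appears in the xtrace; its exact body is an assumption):

    # Run spdk_dd as an NVMe/TCP initiator against the target started above.
    # Core mask and config paths are taken verbatim from this run.
    tcp_dd() {
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
            '--cpumask=[1]' \
            --rpc-socket=/var/tmp/spdk.tgt.sock \
            --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
            "$@"
    }

    # Iteration 1: read the first 1024 MiB window of ftln1.
    tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
           --bs=1048576 --count=1024 --qd=2 --skip=0

With --bs=1048576 --count=1024 each iteration covers a 1 GiB window, and --qd=2 keeps two I/Os in flight; the Copying progress lines that follow report the per-window throughput.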
00:25:45.833 [2024-11-07 09:54:13.310820] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79539 ] 00:25:45.833 [2024-11-07 09:54:13.470299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:46.097 [2024-11-07 09:54:13.571597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:47.491  [2024-11-07T09:54:15.733Z] Copying: 656/1024 [MB] (656 MBps) [2024-11-07T09:54:17.116Z] Copying: 1024/1024 [MB] (average 657 MBps) 00:25:49.445 00:25:49.445 09:54:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:25:49.445 09:54:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:51.349 09:54:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:25:51.349 09:54:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=ce1316c788b6791a2424553ea844e7f1 00:25:51.349 Validate MD5 checksum, iteration 2 00:25:51.349 09:54:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ ce1316c788b6791a2424553ea844e7f1 != \c\e\1\3\1\6\c\7\8\8\b\6\7\9\1\a\2\4\2\4\5\5\3\e\a\8\4\4\e\7\f\1 ]] 00:25:51.349 09:54:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:25:51.349 09:54:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:25:51.349 09:54:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:25:51.349 09:54:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:25:51.349 09:54:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:51.349 09:54:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:51.349 09:54:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:51.349 09:54:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:51.349 09:54:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:25:51.349 [2024-11-07 09:54:18.973013] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
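Iteration 1's digest (ce1316c788b6791a2424553ea844e7f1) matched the value recorded when the test pattern was written earlier in the run, so skip advances by 1024 blocks and iteration 2 reads the next window. A sketch of the full validation loop implied by the trace (the loop variable and expected-digest array are assumptions; the dd parameters and the 1024-block stride are verbatim):

    # Validate the device contents window by window: read 1 GiB, hash it,
    # and compare against the digest captured at write time.
    skip=0
    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 \
               --qd=2 --skip="$skip"
        skip=$((skip + 1024))
        sum=$(md5sum "$testfile" | cut -f1 -d' ')
        # Any mismatch means FTL returned different data than was written.
        [[ $sum == "${expected[i]}" ]] || return 1
    done

The backslash-heavy pattern in the trace ([[ ce1316... != \c\e\1\3... ]]) is just bash xtrace escaping the right-hand side of the comparison character by character; it is the same digest.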
00:25:51.349 [2024-11-07 09:54:18.973125] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79607 ] 00:25:51.608 [2024-11-07 09:54:19.133624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:51.608 [2024-11-07 09:54:19.231121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:53.510  [2024-11-07T09:54:21.181Z] Copying: 723/1024 [MB] (723 MBps) [2024-11-07T09:54:24.474Z] Copying: 1024/1024 [MB] (average 728 MBps) 00:25:56.803 00:25:56.803 09:54:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:25:56.803 09:54:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:58.706 09:54:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:25:58.706 09:54:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=fd474a2526e94e064290d623aefa30bb 00:25:58.706 09:54:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ fd474a2526e94e064290d623aefa30bb != \f\d\4\7\4\a\2\5\2\6\e\9\4\e\0\6\4\2\9\0\d\6\2\3\a\e\f\a\3\0\b\b ]] 00:25:58.706 09:54:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:25:58.706 09:54:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:25:58.706 09:54:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:25:58.706 09:54:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 79470 ]] 00:25:58.706 09:54:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 79470 00:25:58.706 09:54:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:25:58.706 09:54:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:25:58.706 09:54:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:25:58.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:58.706 09:54:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:25:58.706 09:54:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:58.706 09:54:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=79685 00:25:58.706 09:54:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:25:58.706 09:54:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 79685 00:25:58.706 09:54:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 79685 ']' 00:25:58.706 09:54:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:58.706 09:54:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:58.706 09:54:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
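With both windows verified, the test moves to the shutdown-under-fire phase: tcp_target_shutdown_dirty kills pid 79470 with SIGKILL, guaranteeing FTL never runs its shutdown path, and tcp_target_setup immediately starts a replacement target (pid 79685) from the same tgt.json. A sketch of the dirty-shutdown step, with the function body taken from the kill/unset trace (treating the restart as a plain re-invocation is a simplification; the script goes back through tcp_target_setup and waitforlisten):

    # Kill the target hard so the FTL device cannot persist a clean-shutdown
    # marker; the next startup from the same config must recover from a
    # dirty superblock.
    tcp_target_shutdown_dirty() {
        [[ -n $spdk_tgt_pid ]] && kill -9 "$spdk_tgt_pid"
        unset spdk_tgt_pid
    }

    tcp_target_shutdown_dirty
    tcp_target_setup   # same tgt.json; FTL now starts up dirty

The "Killed" notice from autotest_common.sh line 832 a few lines below is bash reporting the SIGKILL'd job, not a test failure.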
00:25:58.706 09:54:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:58.706 09:54:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:58.706 09:54:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:58.707 [2024-11-07 09:54:26.123254] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:25:58.707 [2024-11-07 09:54:26.123373] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79685 ] 00:25:58.707 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: 79470 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:25:58.707 [2024-11-07 09:54:26.276484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:58.707 [2024-11-07 09:54:26.359112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:59.273 [2024-11-07 09:54:26.937636] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:25:59.273 [2024-11-07 09:54:26.937689] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:25:59.533 [2024-11-07 09:54:27.080616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:59.533 [2024-11-07 09:54:27.080667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:25:59.533 [2024-11-07 09:54:27.080678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:25:59.533 [2024-11-07 09:54:27.080685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:59.533 [2024-11-07 09:54:27.080723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:59.533 [2024-11-07 09:54:27.080731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:25:59.533 [2024-11-07 09:54:27.080738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:25:59.533 [2024-11-07 09:54:27.080744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:59.533 [2024-11-07 09:54:27.080761] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:25:59.533 [2024-11-07 09:54:27.081295] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:25:59.533 [2024-11-07 09:54:27.081307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:59.533 [2024-11-07 09:54:27.081313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:25:59.533 [2024-11-07 09:54:27.081320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.553 ms 00:25:59.533 [2024-11-07 09:54:27.081325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:59.533 [2024-11-07 09:54:27.081538] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:25:59.533 [2024-11-07 09:54:27.094039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:59.533 [2024-11-07 09:54:27.094151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:25:59.533 [2024-11-07 09:54:27.094208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.501 ms 
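Every management step in these logs is a fixed quadruple of trace_step lines (Action or Rollback, then name, duration, and status), which makes per-step timings easy to pull out of a saved console log. A throwaway sketch, assuming the usual one-entry-per-line console output (build.log is a placeholder path):

    # Pair each "name:" line with the "duration:" line that follows it and
    # print a step -> time table, matching the trace_step format used above.
    awk '/trace_step.*name: /     { sub(/.*name: /, "");     step = $0 }
         /trace_step.*duration: / { sub(/.*duration: /, ""); print step " -> " $0 }' build.log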
00:25:59.533 [2024-11-07 09:54:27.094226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:59.533 [2024-11-07 09:54:27.100851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:59.533 [2024-11-07 09:54:27.100939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:25:59.533 [2024-11-07 09:54:27.100989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:25:59.533 [2024-11-07 09:54:27.101007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:59.533 [2024-11-07 09:54:27.101258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:59.533 [2024-11-07 09:54:27.101331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:25:59.533 [2024-11-07 09:54:27.101403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.183 ms 00:25:59.533 [2024-11-07 09:54:27.101422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:59.533 [2024-11-07 09:54:27.101473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:59.533 [2024-11-07 09:54:27.101525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:25:59.533 [2024-11-07 09:54:27.101543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:25:59.533 [2024-11-07 09:54:27.101558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:59.533 [2024-11-07 09:54:27.101612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:59.533 [2024-11-07 09:54:27.101639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:25:59.533 [2024-11-07 09:54:27.101656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:25:59.533 [2024-11-07 09:54:27.101700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:59.533 [2024-11-07 09:54:27.101731] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:25:59.534 [2024-11-07 09:54:27.104090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:59.534 [2024-11-07 09:54:27.104177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:25:59.534 [2024-11-07 09:54:27.104220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.364 ms 00:25:59.534 [2024-11-07 09:54:27.104236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:59.534 [2024-11-07 09:54:27.104272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:59.534 [2024-11-07 09:54:27.104370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:25:59.534 [2024-11-07 09:54:27.104389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:25:59.534 [2024-11-07 09:54:27.104404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:59.534 [2024-11-07 09:54:27.104428] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:25:59.534 [2024-11-07 09:54:27.104453] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:25:59.534 [2024-11-07 09:54:27.104496] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:25:59.534 [2024-11-07 09:54:27.104592] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:25:59.534 [2024-11-07 
09:54:27.104696] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:25:59.534 [2024-11-07 09:54:27.104722] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:25:59.534 [2024-11-07 09:54:27.104809] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:25:59.534 [2024-11-07 09:54:27.104818] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:25:59.534 [2024-11-07 09:54:27.104825] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:25:59.534 [2024-11-07 09:54:27.104832] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:25:59.534 [2024-11-07 09:54:27.104837] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:25:59.534 [2024-11-07 09:54:27.104843] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:25:59.534 [2024-11-07 09:54:27.104849] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:25:59.534 [2024-11-07 09:54:27.104855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:59.534 [2024-11-07 09:54:27.104863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:25:59.534 [2024-11-07 09:54:27.104869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.428 ms 00:25:59.534 [2024-11-07 09:54:27.104875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:59.534 [2024-11-07 09:54:27.104945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:59.534 [2024-11-07 09:54:27.104951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:25:59.534 [2024-11-07 09:54:27.104957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:25:59.534 [2024-11-07 09:54:27.104963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:59.534 [2024-11-07 09:54:27.105047] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:25:59.534 [2024-11-07 09:54:27.105055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:25:59.534 [2024-11-07 09:54:27.105065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:59.534 [2024-11-07 09:54:27.105071] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:59.534 [2024-11-07 09:54:27.105076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:25:59.534 [2024-11-07 09:54:27.105082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:25:59.534 [2024-11-07 09:54:27.105087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:25:59.534 [2024-11-07 09:54:27.105092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:25:59.534 [2024-11-07 09:54:27.105097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:25:59.534 [2024-11-07 09:54:27.105102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:59.534 [2024-11-07 09:54:27.105107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:25:59.534 [2024-11-07 09:54:27.105112] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:25:59.534 [2024-11-07 09:54:27.105117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:59.534 [2024-11-07 
09:54:27.105122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:25:59.534 [2024-11-07 09:54:27.105127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:25:59.534 [2024-11-07 09:54:27.105132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:59.534 [2024-11-07 09:54:27.105137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:25:59.534 [2024-11-07 09:54:27.105142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:25:59.534 [2024-11-07 09:54:27.105146] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:59.534 [2024-11-07 09:54:27.105151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:25:59.534 [2024-11-07 09:54:27.105156] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:25:59.534 [2024-11-07 09:54:27.105161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:59.534 [2024-11-07 09:54:27.105168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:25:59.534 [2024-11-07 09:54:27.105178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:25:59.534 [2024-11-07 09:54:27.105183] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:59.534 [2024-11-07 09:54:27.105188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:25:59.534 [2024-11-07 09:54:27.105192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:25:59.534 [2024-11-07 09:54:27.105197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:59.534 [2024-11-07 09:54:27.105202] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:25:59.534 [2024-11-07 09:54:27.105207] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:25:59.534 [2024-11-07 09:54:27.105212] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:59.534 [2024-11-07 09:54:27.105216] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:25:59.534 [2024-11-07 09:54:27.105221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:25:59.534 [2024-11-07 09:54:27.105226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:59.534 [2024-11-07 09:54:27.105231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:25:59.534 [2024-11-07 09:54:27.105236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:25:59.534 [2024-11-07 09:54:27.105240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:59.534 [2024-11-07 09:54:27.105245] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:25:59.534 [2024-11-07 09:54:27.105250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:25:59.534 [2024-11-07 09:54:27.105255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:59.534 [2024-11-07 09:54:27.105259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:25:59.534 [2024-11-07 09:54:27.105264] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:25:59.534 [2024-11-07 09:54:27.105268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:59.534 [2024-11-07 09:54:27.105273] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:25:59.534 [2024-11-07 09:54:27.105279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:25:59.534 
[2024-11-07 09:54:27.105284] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:59.534 [2024-11-07 09:54:27.105289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:59.534 [2024-11-07 09:54:27.105295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:25:59.534 [2024-11-07 09:54:27.105300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:25:59.534 [2024-11-07 09:54:27.105305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:25:59.534 [2024-11-07 09:54:27.105310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:25:59.534 [2024-11-07 09:54:27.105315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:25:59.534 [2024-11-07 09:54:27.105319] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:25:59.534 [2024-11-07 09:54:27.105326] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:25:59.534 [2024-11-07 09:54:27.105333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:59.534 [2024-11-07 09:54:27.105340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:25:59.534 [2024-11-07 09:54:27.105346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:25:59.534 [2024-11-07 09:54:27.105352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:25:59.534 [2024-11-07 09:54:27.105358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:25:59.534 [2024-11-07 09:54:27.105363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:25:59.534 [2024-11-07 09:54:27.105368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:25:59.534 [2024-11-07 09:54:27.105373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:25:59.535 [2024-11-07 09:54:27.105379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:25:59.535 [2024-11-07 09:54:27.105384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:25:59.535 [2024-11-07 09:54:27.105389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:25:59.535 [2024-11-07 09:54:27.105395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:25:59.535 [2024-11-07 09:54:27.105400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:25:59.535 [2024-11-07 09:54:27.105405] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:25:59.535 [2024-11-07 09:54:27.105411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] 
Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:25:59.535 [2024-11-07 09:54:27.105416] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:25:59.535 [2024-11-07 09:54:27.105422] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:59.535 [2024-11-07 09:54:27.105428] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:59.535 [2024-11-07 09:54:27.105434] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:25:59.535 [2024-11-07 09:54:27.105439] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:25:59.535 [2024-11-07 09:54:27.105444] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:25:59.535 [2024-11-07 09:54:27.105450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:59.535 [2024-11-07 09:54:27.105458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:25:59.535 [2024-11-07 09:54:27.105463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.456 ms 00:25:59.535 [2024-11-07 09:54:27.105469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:59.535 [2024-11-07 09:54:27.124395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:59.535 [2024-11-07 09:54:27.124483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:25:59.535 [2024-11-07 09:54:27.124520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.888 ms 00:25:59.535 [2024-11-07 09:54:27.124538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:59.535 [2024-11-07 09:54:27.124575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:59.535 [2024-11-07 09:54:27.124591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:25:59.535 [2024-11-07 09:54:27.124607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:25:59.535 [2024-11-07 09:54:27.124621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:59.535 [2024-11-07 09:54:27.148266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:59.535 [2024-11-07 09:54:27.148359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:25:59.535 [2024-11-07 09:54:27.148399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.581 ms 00:25:59.535 [2024-11-07 09:54:27.148417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:59.535 [2024-11-07 09:54:27.148452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:59.535 [2024-11-07 09:54:27.148468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:25:59.535 [2024-11-07 09:54:27.148483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:25:59.535 [2024-11-07 09:54:27.148498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:59.535 [2024-11-07 09:54:27.148581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:59.535 [2024-11-07 09:54:27.148602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 
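
The trace_step notices above arrive in fixed Action/name/duration/status groups emitted by mngt/ftl_mngt.c, one group per management step of the FTL startup. For a quick per-step timing summary of a run like this one, a rough shell sketch — assuming the wrapped log has first been restored to one notice per line, and with ftl_startup.log as a placeholder filename:

awk '
  /428:trace_step:/ && /name:/     { sub(/.*name: /, ""); step = $0 }
  /430:trace_step:/ && /duration:/ { d = $0; sub(/.*duration: /, "", d); sub(/ ms.*/, "", d); total[step] += d }
  END { for (s in total) printf "%-35s %10.3f ms\n", s, total[s] }
' ftl_startup.log

Steps that repeat (the per-chunk recovery actions below, for instance) are summed under one name, which is usually what is wanted when comparing runs.
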
00:25:59.535 [2024-11-07 09:54:27.148618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:25:59.535 [2024-11-07 09:54:27.148678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:59.535 [2024-11-07 09:54:27.148723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:59.535 [2024-11-07 09:54:27.148740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:25:59.535 [2024-11-07 09:54:27.148754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:25:59.535 [2024-11-07 09:54:27.148798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:59.535 [2024-11-07 09:54:27.160127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:59.535 [2024-11-07 09:54:27.160209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:25:59.535 [2024-11-07 09:54:27.160247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.301 ms 00:25:59.535 [2024-11-07 09:54:27.160264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:59.535 [2024-11-07 09:54:27.160350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:59.535 [2024-11-07 09:54:27.160371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:25:59.535 [2024-11-07 09:54:27.160386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:25:59.535 [2024-11-07 09:54:27.160400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:59.535 [2024-11-07 09:54:27.185286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:59.535 [2024-11-07 09:54:27.185390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:25:59.535 [2024-11-07 09:54:27.185433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.862 ms 00:25:59.535 [2024-11-07 09:54:27.185452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:59.535 [2024-11-07 09:54:27.192674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:59.535 [2024-11-07 09:54:27.192755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:25:59.535 [2024-11-07 09:54:27.192805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.396 ms 00:25:59.535 [2024-11-07 09:54:27.192822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:59.794 [2024-11-07 09:54:27.235835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:59.794 [2024-11-07 09:54:27.235976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:25:59.794 [2024-11-07 09:54:27.236027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 42.961 ms 00:25:59.794 [2024-11-07 09:54:27.236045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:59.794 [2024-11-07 09:54:27.236156] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:25:59.794 [2024-11-07 09:54:27.236255] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:25:59.794 [2024-11-07 09:54:27.236347] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:25:59.794 [2024-11-07 09:54:27.236493] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:25:59.794 [2024-11-07 09:54:27.236517] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:59.794 [2024-11-07 09:54:27.236532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:25:59.794 [2024-11-07 09:54:27.236577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.432 ms 00:25:59.794 [2024-11-07 09:54:27.236595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:59.794 [2024-11-07 09:54:27.236661] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:25:59.794 [2024-11-07 09:54:27.236745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:59.794 [2024-11-07 09:54:27.236764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:25:59.794 [2024-11-07 09:54:27.236780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.084 ms 00:25:59.794 [2024-11-07 09:54:27.236794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:59.794 [2024-11-07 09:54:27.248271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:59.794 [2024-11-07 09:54:27.248370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:25:59.794 [2024-11-07 09:54:27.248409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.450 ms 00:25:59.794 [2024-11-07 09:54:27.248427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:59.794 [2024-11-07 09:54:27.254953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:59.794 [2024-11-07 09:54:27.255030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:25:59.794 [2024-11-07 09:54:27.255067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:25:59.794 [2024-11-07 09:54:27.255084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:59.794 [2024-11-07 09:54:27.255159] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:25:59.794 [2024-11-07 09:54:27.255337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:59.794 [2024-11-07 09:54:27.255438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:25:59.794 [2024-11-07 09:54:27.255458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.179 ms 00:25:59.794 [2024-11-07 09:54:27.255474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.053 [2024-11-07 09:54:27.707826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.053 [2024-11-07 09:54:27.708034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:26:00.053 [2024-11-07 09:54:27.708097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 451.676 ms 00:26:00.053 [2024-11-07 09:54:27.708122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.053 [2024-11-07 09:54:27.711930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.053 [2024-11-07 09:54:27.712040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:26:00.053 [2024-11-07 09:54:27.712100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.875 ms 00:26:00.053 [2024-11-07 09:54:27.712123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.053 [2024-11-07 09:54:27.712495] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered 
chunk, offset = 262144, seq id 14 00:26:00.053 [2024-11-07 09:54:27.712608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.053 [2024-11-07 09:54:27.712680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:26:00.053 [2024-11-07 09:54:27.712706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.407 ms 00:26:00.053 [2024-11-07 09:54:27.713008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.053 [2024-11-07 09:54:27.713367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.053 [2024-11-07 09:54:27.713572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:26:00.053 [2024-11-07 09:54:27.713783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:26:00.053 [2024-11-07 09:54:27.713950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.053 [2024-11-07 09:54:27.714240] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 459.031 ms, result 0 00:26:00.053 [2024-11-07 09:54:27.714573] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:26:00.053 [2024-11-07 09:54:27.714946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.053 [2024-11-07 09:54:27.715126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:26:00.053 [2024-11-07 09:54:27.715350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.376 ms 00:26:00.053 [2024-11-07 09:54:27.715508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.621 [2024-11-07 09:54:28.154468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.621 [2024-11-07 09:54:28.154906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:26:00.621 [2024-11-07 09:54:28.154960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 436.288 ms 00:26:00.621 [2024-11-07 09:54:28.154984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.621 [2024-11-07 09:54:28.159250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.621 [2024-11-07 09:54:28.159289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:26:00.621 [2024-11-07 09:54:28.159299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.796 ms 00:26:00.621 [2024-11-07 09:54:28.159306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.621 [2024-11-07 09:54:28.159621] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:26:00.621 [2024-11-07 09:54:28.159657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.621 [2024-11-07 09:54:28.159665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:26:00.621 [2024-11-07 09:54:28.159673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.326 ms 00:26:00.621 [2024-11-07 09:54:28.159680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.621 [2024-11-07 09:54:28.159707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.621 [2024-11-07 09:54:28.159715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:26:00.621 [2024-11-07 09:54:28.159722] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:00.621 [2024-11-07 09:54:28.159728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.621 [2024-11-07 09:54:28.159762] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 445.224 ms, result 0 00:26:00.621 [2024-11-07 09:54:28.159800] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:00.621 [2024-11-07 09:54:28.159814] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:26:00.621 [2024-11-07 09:54:28.159824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.621 [2024-11-07 09:54:28.159831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:26:00.621 [2024-11-07 09:54:28.159839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 904.677 ms 00:26:00.622 [2024-11-07 09:54:28.159846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.622 [2024-11-07 09:54:28.159875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.622 [2024-11-07 09:54:28.159883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:26:00.622 [2024-11-07 09:54:28.159895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:00.622 [2024-11-07 09:54:28.159902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.622 [2024-11-07 09:54:28.170394] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:00.622 [2024-11-07 09:54:28.170587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.622 [2024-11-07 09:54:28.170601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:00.622 [2024-11-07 09:54:28.170610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.670 ms 00:26:00.622 [2024-11-07 09:54:28.170618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.622 [2024-11-07 09:54:28.171308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.622 [2024-11-07 09:54:28.171327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:26:00.622 [2024-11-07 09:54:28.171339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.602 ms 00:26:00.622 [2024-11-07 09:54:28.171346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.622 [2024-11-07 09:54:28.173566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.622 [2024-11-07 09:54:28.173680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:26:00.622 [2024-11-07 09:54:28.173693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.204 ms 00:26:00.622 [2024-11-07 09:54:28.173701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.622 [2024-11-07 09:54:28.173744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.622 [2024-11-07 09:54:28.173753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:26:00.622 [2024-11-07 09:54:28.173761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:00.622 [2024-11-07 09:54:28.173770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.622 [2024-11-07 09:54:28.173869] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.622 [2024-11-07 09:54:28.173878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:00.622 [2024-11-07 09:54:28.173886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:26:00.622 [2024-11-07 09:54:28.173893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.622 [2024-11-07 09:54:28.173912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.622 [2024-11-07 09:54:28.173920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:00.622 [2024-11-07 09:54:28.173927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:00.622 [2024-11-07 09:54:28.173934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.622 [2024-11-07 09:54:28.173959] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:26:00.622 [2024-11-07 09:54:28.173971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.622 [2024-11-07 09:54:28.173978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:26:00.622 [2024-11-07 09:54:28.173985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:26:00.622 [2024-11-07 09:54:28.173992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.622 [2024-11-07 09:54:28.174042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.622 [2024-11-07 09:54:28.174051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:00.622 [2024-11-07 09:54:28.174059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:26:00.622 [2024-11-07 09:54:28.174066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.622 [2024-11-07 09:54:28.174866] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1093.822 ms, result 0 00:26:00.622 [2024-11-07 09:54:28.187206] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:00.622 [2024-11-07 09:54:28.203191] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:26:00.622 [2024-11-07 09:54:28.211333] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:01.212 Validate MD5 checksum, iteration 1 00:26:01.212 09:54:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:01.212 09:54:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:26:01.212 09:54:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:01.212 09:54:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:26:01.212 09:54:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:26:01.212 09:54:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:26:01.212 09:54:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:26:01.212 09:54:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:01.212 09:54:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:26:01.212 09:54:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:01.212 09:54:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:01.212 09:54:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:01.212 09:54:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:01.212 09:54:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:01.212 09:54:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:01.212 [2024-11-07 09:54:28.719811] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:26:01.212 [2024-11-07 09:54:28.720089] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79719 ] 00:26:01.509 [2024-11-07 09:54:28.880458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:01.509 [2024-11-07 09:54:28.977460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:02.887  [2024-11-07T09:54:31.127Z] Copying: 696/1024 [MB] (696 MBps) [2024-11-07T09:54:34.421Z] Copying: 1024/1024 [MB] (average 695 MBps) 00:26:06.750 00:26:06.750 09:54:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:26:06.750 09:54:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:08.656 09:54:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:08.656 Validate MD5 checksum, iteration 2 00:26:08.656 09:54:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=ce1316c788b6791a2424553ea844e7f1 00:26:08.656 09:54:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ ce1316c788b6791a2424553ea844e7f1 != \c\e\1\3\1\6\c\7\8\8\b\6\7\9\1\a\2\4\2\4\5\5\3\e\a\8\4\4\e\7\f\1 ]] 00:26:08.656 09:54:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:08.656 09:54:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:08.656 09:54:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:26:08.656 09:54:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:08.656 09:54:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:08.656 09:54:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:08.656 09:54:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:08.656 09:54:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:08.656 09:54:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:08.656 [2024-11-07 09:54:35.926696] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:26:08.656 [2024-11-07 09:54:35.926793] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79797 ] 00:26:08.656 [2024-11-07 09:54:36.080945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:08.656 [2024-11-07 09:54:36.177207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:10.031  [2024-11-07T09:54:38.311Z] Copying: 716/1024 [MB] (716 MBps) [2024-11-07T09:54:40.215Z] Copying: 1024/1024 [MB] (average 707 MBps) 00:26:12.544 00:26:12.544 09:54:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:26:12.544 09:54:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:15.081 09:54:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:15.081 09:54:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=fd474a2526e94e064290d623aefa30bb 00:26:15.081 09:54:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ fd474a2526e94e064290d623aefa30bb != \f\d\4\7\4\a\2\5\2\6\e\9\4\e\0\6\4\2\9\0\d\6\2\3\a\e\f\a\3\0\b\b ]] 00:26:15.081 09:54:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:15.081 09:54:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:15.081 09:54:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:26:15.081 09:54:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:26:15.081 09:54:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:26:15.081 09:54:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:15.081 09:54:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:26:15.081 09:54:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:26:15.081 09:54:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:26:15.081 09:54:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:26:15.081 09:54:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 79685 ]] 00:26:15.081 09:54:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 79685 00:26:15.081 09:54:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # '[' -z 79685 ']' 00:26:15.081 09:54:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # kill -0 79685 00:26:15.081 09:54:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # uname 00:26:15.081 09:54:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:15.081 09:54:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79685 00:26:15.081 killing process with pid 79685 00:26:15.081 09:54:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:15.081 09:54:42 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:15.081 09:54:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79685' 00:26:15.081 09:54:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@971 -- # kill 79685 00:26:15.081 09:54:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@976 -- # wait 79685 00:26:15.341 [2024-11-07 09:54:42.837033] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:26:15.341 [2024-11-07 09:54:42.848937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.341 [2024-11-07 09:54:42.848974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:26:15.341 [2024-11-07 09:54:42.848984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:15.341 [2024-11-07 09:54:42.848991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.341 [2024-11-07 09:54:42.849008] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:26:15.341 [2024-11-07 09:54:42.851031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.341 [2024-11-07 09:54:42.851058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:26:15.341 [2024-11-07 09:54:42.851066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.012 ms 00:26:15.341 [2024-11-07 09:54:42.851076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.341 [2024-11-07 09:54:42.851284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.341 [2024-11-07 09:54:42.851295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:26:15.341 [2024-11-07 09:54:42.851302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.189 ms 00:26:15.341 [2024-11-07 09:54:42.851308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.341 [2024-11-07 09:54:42.852246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.341 [2024-11-07 09:54:42.852360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:26:15.341 [2024-11-07 09:54:42.852371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.924 ms 00:26:15.341 [2024-11-07 09:54:42.852378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.341 [2024-11-07 09:54:42.853256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.341 [2024-11-07 09:54:42.853273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:26:15.341 [2024-11-07 09:54:42.853280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.849 ms 00:26:15.341 [2024-11-07 09:54:42.853285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.341 [2024-11-07 09:54:42.860683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.341 [2024-11-07 09:54:42.860712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:26:15.341 [2024-11-07 09:54:42.860720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.370 ms 00:26:15.341 [2024-11-07 09:54:42.860726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.341 [2024-11-07 09:54:42.865096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.341 [2024-11-07 09:54:42.865123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl] name: Persist valid map metadata 00:26:15.341 [2024-11-07 09:54:42.865132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.336 ms 00:26:15.341 [2024-11-07 09:54:42.865139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.341 [2024-11-07 09:54:42.865211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.341 [2024-11-07 09:54:42.865220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:26:15.341 [2024-11-07 09:54:42.865226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:26:15.341 [2024-11-07 09:54:42.865232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.341 [2024-11-07 09:54:42.873239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.341 [2024-11-07 09:54:42.873272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:26:15.341 [2024-11-07 09:54:42.873281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.989 ms 00:26:15.341 [2024-11-07 09:54:42.873286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.341 [2024-11-07 09:54:42.880682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.341 [2024-11-07 09:54:42.880710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:26:15.341 [2024-11-07 09:54:42.880717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.365 ms 00:26:15.341 [2024-11-07 09:54:42.880723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.341 [2024-11-07 09:54:42.887888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.341 [2024-11-07 09:54:42.887912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:26:15.341 [2024-11-07 09:54:42.887920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.139 ms 00:26:15.341 [2024-11-07 09:54:42.887925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.341 [2024-11-07 09:54:42.895255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.341 [2024-11-07 09:54:42.895423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:26:15.341 [2024-11-07 09:54:42.895436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.281 ms 00:26:15.341 [2024-11-07 09:54:42.895442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.341 [2024-11-07 09:54:42.895467] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:26:15.341 [2024-11-07 09:54:42.895479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:15.341 [2024-11-07 09:54:42.895486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:26:15.341 [2024-11-07 09:54:42.895492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:26:15.341 [2024-11-07 09:54:42.895499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:15.341 [2024-11-07 09:54:42.895505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:15.341 [2024-11-07 09:54:42.895511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:15.342 [2024-11-07 09:54:42.895517] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:15.342 [2024-11-07 09:54:42.895522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:15.342 [2024-11-07 09:54:42.895528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:15.342 [2024-11-07 09:54:42.895534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:15.342 [2024-11-07 09:54:42.895540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:15.342 [2024-11-07 09:54:42.895545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:15.342 [2024-11-07 09:54:42.895551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:15.342 [2024-11-07 09:54:42.895556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:15.342 [2024-11-07 09:54:42.895561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:15.342 [2024-11-07 09:54:42.895567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:15.342 [2024-11-07 09:54:42.895572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:15.342 [2024-11-07 09:54:42.895578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:15.342 [2024-11-07 09:54:42.895585] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:26:15.342 [2024-11-07 09:54:42.895591] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: af303546-c6d2-4200-836f-b9c072444d67 00:26:15.342 [2024-11-07 09:54:42.895597] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:26:15.342 [2024-11-07 09:54:42.895602] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:26:15.342 [2024-11-07 09:54:42.895607] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:26:15.342 [2024-11-07 09:54:42.895613] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:26:15.342 [2024-11-07 09:54:42.895619] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:26:15.342 [2024-11-07 09:54:42.895624] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:26:15.342 [2024-11-07 09:54:42.895644] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:26:15.342 [2024-11-07 09:54:42.895649] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:26:15.342 [2024-11-07 09:54:42.895654] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:26:15.342 [2024-11-07 09:54:42.895660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.342 [2024-11-07 09:54:42.895667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:26:15.342 [2024-11-07 09:54:42.895677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.193 ms 00:26:15.342 [2024-11-07 09:54:42.895684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.342 [2024-11-07 09:54:42.906135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.342 [2024-11-07 09:54:42.906257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: 
Deinitialize L2P 00:26:15.342 [2024-11-07 09:54:42.906271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.436 ms 00:26:15.342 [2024-11-07 09:54:42.906277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.342 [2024-11-07 09:54:42.906585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.342 [2024-11-07 09:54:42.906602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:26:15.342 [2024-11-07 09:54:42.906613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.289 ms 00:26:15.342 [2024-11-07 09:54:42.906623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.342 [2024-11-07 09:54:42.943831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:15.342 [2024-11-07 09:54:42.943866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:15.342 [2024-11-07 09:54:42.943876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:15.342 [2024-11-07 09:54:42.943884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.342 [2024-11-07 09:54:42.944792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:15.342 [2024-11-07 09:54:42.944932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:15.342 [2024-11-07 09:54:42.944945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:15.342 [2024-11-07 09:54:42.944952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.342 [2024-11-07 09:54:42.945029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:15.342 [2024-11-07 09:54:42.945037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:15.342 [2024-11-07 09:54:42.945044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:15.342 [2024-11-07 09:54:42.945050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.342 [2024-11-07 09:54:42.945063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:15.342 [2024-11-07 09:54:42.945073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:15.342 [2024-11-07 09:54:42.945079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:15.342 [2024-11-07 09:54:42.945085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.342 [2024-11-07 09:54:43.004325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:15.342 [2024-11-07 09:54:43.004477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:15.342 [2024-11-07 09:54:43.004491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:15.342 [2024-11-07 09:54:43.004498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.601 [2024-11-07 09:54:43.054414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:15.601 [2024-11-07 09:54:43.054462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:15.601 [2024-11-07 09:54:43.054471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:15.601 [2024-11-07 09:54:43.054478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.601 [2024-11-07 09:54:43.054538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:15.601 [2024-11-07 09:54:43.054546] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:15.601 [2024-11-07 09:54:43.054552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:15.601 [2024-11-07 09:54:43.054558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.601 [2024-11-07 09:54:43.054601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:15.601 [2024-11-07 09:54:43.054608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:15.601 [2024-11-07 09:54:43.054617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:15.601 [2024-11-07 09:54:43.054648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.601 [2024-11-07 09:54:43.054735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:15.601 [2024-11-07 09:54:43.054743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:15.601 [2024-11-07 09:54:43.054750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:15.601 [2024-11-07 09:54:43.054756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.601 [2024-11-07 09:54:43.054781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:15.601 [2024-11-07 09:54:43.054788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:26:15.601 [2024-11-07 09:54:43.054794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:15.601 [2024-11-07 09:54:43.054802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.601 [2024-11-07 09:54:43.054829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:15.601 [2024-11-07 09:54:43.054835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:15.601 [2024-11-07 09:54:43.054842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:15.601 [2024-11-07 09:54:43.054847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.601 [2024-11-07 09:54:43.054878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:15.601 [2024-11-07 09:54:43.054885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:15.601 [2024-11-07 09:54:43.054893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:15.601 [2024-11-07 09:54:43.054899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.601 [2024-11-07 09:54:43.054987] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 206.029 ms, result 0 00:26:16.167 09:54:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:26:16.167 09:54:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:16.167 09:54:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:26:16.167 09:54:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:26:16.167 09:54:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:26:16.167 09:54:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:16.167 Remove shared memory files 00:26:16.167 09:54:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:26:16.167 09:54:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove 
shared memory files 00:26:16.167 09:54:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:26:16.167 09:54:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:26:16.168 09:54:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid79470 00:26:16.168 09:54:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:16.168 09:54:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:26:16.168 ************************************ 00:26:16.168 END TEST ftl_upgrade_shutdown 00:26:16.168 ************************************ 00:26:16.168 00:26:16.168 real 1m22.491s 00:26:16.168 user 1m54.703s 00:26:16.168 sys 0m17.793s 00:26:16.168 09:54:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:16.168 09:54:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:16.168 09:54:43 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:26:16.168 09:54:43 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:26:16.168 09:54:43 ftl -- ftl/ftl.sh@14 -- # killprocess 72325 00:26:16.168 Process with pid 72325 is not found 00:26:16.168 09:54:43 ftl -- common/autotest_common.sh@952 -- # '[' -z 72325 ']' 00:26:16.168 09:54:43 ftl -- common/autotest_common.sh@956 -- # kill -0 72325 00:26:16.168 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (72325) - No such process 00:26:16.168 09:54:43 ftl -- common/autotest_common.sh@979 -- # echo 'Process with pid 72325 is not found' 00:26:16.168 09:54:43 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:26:16.168 09:54:43 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=79918 00:26:16.168 09:54:43 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:16.168 09:54:43 ftl -- ftl/ftl.sh@20 -- # waitforlisten 79918 00:26:16.168 09:54:43 ftl -- common/autotest_common.sh@833 -- # '[' -z 79918 ']' 00:26:16.168 09:54:43 ftl -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:16.168 09:54:43 ftl -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:16.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:16.168 09:54:43 ftl -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:16.168 09:54:43 ftl -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:16.168 09:54:43 ftl -- common/autotest_common.sh@10 -- # set +x 00:26:16.168 [2024-11-07 09:54:43.818835] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
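
The xtrace lines above (ftl/upgrade_shutdown.sh@96 through @105) give the shape of test_validate_checksum: read the FTL bdev back in 1024 MiB slices over NVMe/TCP with tcp_dd, advance skip by the slice size, and compare MD5 sums. A loose reconstruction from those trace lines — iterations, testfile, and the checksums array of sums recorded during the earlier write phase are stand-ins here, not quotes from the script:

skip=0
for (( i = 0; i < iterations; i++ )); do
    echo "Validate MD5 checksum, iteration $((i + 1))"
    # read the next 1024 MiB of ftln1 back over the 127.0.0.1:4420 target
    tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip=$skip
    skip=$((skip + 1024))
    sum=$(md5sum "$testfile" | cut -f1 -d' ')
    # a mismatch against the pre-shutdown sum fails the test
    [[ $sum == "${checksums[i]}" ]] || return 1
done

tcp_dd itself (ftl/common.sh@198-@199, traced above) is a thin wrapper that points spdk_dd at the initiator config in test/ftl/config/ini.json, so the reads exercise the full NVMe/TCP path rather than a local bdev.
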
00:26:16.168 [2024-11-07 09:54:43.818954] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79918 ] 00:26:16.461 [2024-11-07 09:54:43.973853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:16.461 [2024-11-07 09:54:44.053364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:17.028 09:54:44 ftl -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:17.028 09:54:44 ftl -- common/autotest_common.sh@866 -- # return 0 00:26:17.028 09:54:44 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:26:17.286 nvme0n1 00:26:17.286 09:54:44 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:26:17.286 09:54:44 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:26:17.286 09:54:44 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:17.544 09:54:45 ftl -- ftl/common.sh@28 -- # stores=15d36886-4c77-448f-918e-1560ac4937bc 00:26:17.544 09:54:45 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:26:17.545 09:54:45 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 15d36886-4c77-448f-918e-1560ac4937bc 00:26:17.803 09:54:45 ftl -- ftl/ftl.sh@23 -- # killprocess 79918 00:26:17.803 09:54:45 ftl -- common/autotest_common.sh@952 -- # '[' -z 79918 ']' 00:26:17.803 09:54:45 ftl -- common/autotest_common.sh@956 -- # kill -0 79918 00:26:17.803 09:54:45 ftl -- common/autotest_common.sh@957 -- # uname 00:26:17.803 09:54:45 ftl -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:17.803 09:54:45 ftl -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79918 00:26:17.803 killing process with pid 79918 00:26:17.803 09:54:45 ftl -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:17.803 09:54:45 ftl -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:17.803 09:54:45 ftl -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79918' 00:26:17.803 09:54:45 ftl -- common/autotest_common.sh@971 -- # kill 79918 00:26:17.803 09:54:45 ftl -- common/autotest_common.sh@976 -- # wait 79918 00:26:19.179 09:54:46 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:19.179 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:19.437 Waiting for block devices as requested 00:26:19.437 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:26:19.437 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:26:19.437 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:26:19.437 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:26:24.711 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:26:24.711 Remove shared memory files 00:26:24.711 09:54:52 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:26:24.711 09:54:52 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:24.711 09:54:52 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:26:24.711 09:54:52 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:26:24.711 09:54:52 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:26:24.711 09:54:52 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:24.711 09:54:52 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:26:24.711 
************************************ 00:26:24.711 END TEST ftl 00:26:24.712 ************************************ 00:26:24.712 00:26:24.712 real 11m7.085s 00:26:24.712 user 13m34.311s 00:26:24.712 sys 1m13.720s 00:26:24.712 09:54:52 ftl -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:24.712 09:54:52 ftl -- common/autotest_common.sh@10 -- # set +x 00:26:24.712 09:54:52 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:26:24.712 09:54:52 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:26:24.712 09:54:52 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:26:24.712 09:54:52 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:26:24.712 09:54:52 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:26:24.712 09:54:52 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:26:24.712 09:54:52 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:26:24.712 09:54:52 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:26:24.712 09:54:52 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:26:24.712 09:54:52 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:26:24.712 09:54:52 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:24.712 09:54:52 -- common/autotest_common.sh@10 -- # set +x 00:26:24.712 09:54:52 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:26:24.712 09:54:52 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:26:24.712 09:54:52 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:26:24.712 09:54:52 -- common/autotest_common.sh@10 -- # set +x 00:26:26.087 INFO: APP EXITING 00:26:26.087 INFO: killing all VMs 00:26:26.087 INFO: killing vhost app 00:26:26.087 INFO: EXIT DONE 00:26:26.087 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:26.345 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:26:26.345 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:26:26.345 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:26:26.345 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:26:26.603 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:26.861 Cleaning 00:26:26.861 Removing: /var/run/dpdk/spdk0/config 00:26:26.861 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:26:27.120 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:26:27.120 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:26:27.120 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:26:27.120 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:26:27.120 Removing: /var/run/dpdk/spdk0/hugepage_info 00:26:27.120 Removing: /var/run/dpdk/spdk0 00:26:27.120 Removing: /var/run/dpdk/spdk_pid56974 00:26:27.120 Removing: /var/run/dpdk/spdk_pid57176 00:26:27.120 Removing: /var/run/dpdk/spdk_pid57400 00:26:27.120 Removing: /var/run/dpdk/spdk_pid57493 00:26:27.120 Removing: /var/run/dpdk/spdk_pid57532 00:26:27.120 Removing: /var/run/dpdk/spdk_pid57649 00:26:27.120 Removing: /var/run/dpdk/spdk_pid57667 00:26:27.120 Removing: /var/run/dpdk/spdk_pid57861 00:26:27.120 Removing: /var/run/dpdk/spdk_pid57959 00:26:27.120 Removing: /var/run/dpdk/spdk_pid58050 00:26:27.120 Removing: /var/run/dpdk/spdk_pid58155 00:26:27.120 Removing: /var/run/dpdk/spdk_pid58247 00:26:27.120 Removing: /var/run/dpdk/spdk_pid58287 00:26:27.120 Removing: /var/run/dpdk/spdk_pid58324 00:26:27.120 Removing: /var/run/dpdk/spdk_pid58400 00:26:27.120 Removing: /var/run/dpdk/spdk_pid58495 00:26:27.120 Removing: /var/run/dpdk/spdk_pid58931 00:26:27.120 Removing: /var/run/dpdk/spdk_pid58995 
00:26:27.120 Removing: /var/run/dpdk/spdk_pid59047
00:26:27.120 Removing: /var/run/dpdk/spdk_pid59063
00:26:27.120 Removing: /var/run/dpdk/spdk_pid59160
00:26:27.120 Removing: /var/run/dpdk/spdk_pid59171
00:26:27.120 Removing: /var/run/dpdk/spdk_pid59272
00:26:27.120 Removing: /var/run/dpdk/spdk_pid59288
00:26:27.120 Removing: /var/run/dpdk/spdk_pid59347
00:26:27.120 Removing: /var/run/dpdk/spdk_pid59359
00:26:27.120 Removing: /var/run/dpdk/spdk_pid59418
00:26:27.120 Removing: /var/run/dpdk/spdk_pid59436
00:26:27.120 Removing: /var/run/dpdk/spdk_pid59596
00:26:27.120 Removing: /var/run/dpdk/spdk_pid59628
00:26:27.120 Removing: /var/run/dpdk/spdk_pid59716
00:26:27.120 Removing: /var/run/dpdk/spdk_pid59888
00:26:27.120 Removing: /var/run/dpdk/spdk_pid59972
00:26:27.120 Removing: /var/run/dpdk/spdk_pid60008
00:26:27.120 Removing: /var/run/dpdk/spdk_pid60436
00:26:27.120 Removing: /var/run/dpdk/spdk_pid60531
00:26:27.120 Removing: /var/run/dpdk/spdk_pid60653
00:26:27.120 Removing: /var/run/dpdk/spdk_pid60706
00:26:27.120 Removing: /var/run/dpdk/spdk_pid60726
00:26:27.120 Removing: /var/run/dpdk/spdk_pid60810
00:26:27.120 Removing: /var/run/dpdk/spdk_pid61435
00:26:27.120 Removing: /var/run/dpdk/spdk_pid61473
00:26:27.120 Removing: /var/run/dpdk/spdk_pid61945
00:26:27.120 Removing: /var/run/dpdk/spdk_pid62038
00:26:27.120 Removing: /var/run/dpdk/spdk_pid62158
00:26:27.120 Removing: /var/run/dpdk/spdk_pid62211
00:26:27.120 Removing: /var/run/dpdk/spdk_pid62242
00:26:27.120 Removing: /var/run/dpdk/spdk_pid62267
00:26:27.121 Removing: /var/run/dpdk/spdk_pid64108
00:26:27.121 Removing: /var/run/dpdk/spdk_pid64240
00:26:27.121 Removing: /var/run/dpdk/spdk_pid64249
00:26:27.121 Removing: /var/run/dpdk/spdk_pid64261
00:26:27.121 Removing: /var/run/dpdk/spdk_pid64306
00:26:27.121 Removing: /var/run/dpdk/spdk_pid64310
00:26:27.121 Removing: /var/run/dpdk/spdk_pid64322
00:26:27.121 Removing: /var/run/dpdk/spdk_pid64368
00:26:27.121 Removing: /var/run/dpdk/spdk_pid64372
00:26:27.121 Removing: /var/run/dpdk/spdk_pid64384
00:26:27.121 Removing: /var/run/dpdk/spdk_pid64430
00:26:27.121 Removing: /var/run/dpdk/spdk_pid64434
00:26:27.121 Removing: /var/run/dpdk/spdk_pid64446
00:26:27.121 Removing: /var/run/dpdk/spdk_pid65821
00:26:27.121 Removing: /var/run/dpdk/spdk_pid65920
00:26:27.121 Removing: /var/run/dpdk/spdk_pid67325
00:26:27.121 Removing: /var/run/dpdk/spdk_pid68692
00:26:27.121 Removing: /var/run/dpdk/spdk_pid68785
00:26:27.121 Removing: /var/run/dpdk/spdk_pid68861
00:26:27.121 Removing: /var/run/dpdk/spdk_pid68937
00:26:27.121 Removing: /var/run/dpdk/spdk_pid69031
00:26:27.121 Removing: /var/run/dpdk/spdk_pid69105
00:26:27.121 Removing: /var/run/dpdk/spdk_pid69247
00:26:27.121 Removing: /var/run/dpdk/spdk_pid69604
00:26:27.121 Removing: /var/run/dpdk/spdk_pid69642
00:26:27.121 Removing: /var/run/dpdk/spdk_pid70095
00:26:27.121 Removing: /var/run/dpdk/spdk_pid70276
00:26:27.121 Removing: /var/run/dpdk/spdk_pid70375
00:26:27.121 Removing: /var/run/dpdk/spdk_pid70490
00:26:27.121 Removing: /var/run/dpdk/spdk_pid70540
00:26:27.121 Removing: /var/run/dpdk/spdk_pid70564
00:26:27.121 Removing: /var/run/dpdk/spdk_pid70851
00:26:27.121 Removing: /var/run/dpdk/spdk_pid70906
00:26:27.121 Removing: /var/run/dpdk/spdk_pid70977
00:26:27.121 Removing: /var/run/dpdk/spdk_pid71368
00:26:27.121 Removing: /var/run/dpdk/spdk_pid71515
00:26:27.121 Removing: /var/run/dpdk/spdk_pid72325
00:26:27.121 Removing: /var/run/dpdk/spdk_pid72457
00:26:27.121 Removing: /var/run/dpdk/spdk_pid72632
00:26:27.121 Removing: /var/run/dpdk/spdk_pid72727
00:26:27.121 Removing: /var/run/dpdk/spdk_pid73046
00:26:27.121 Removing: /var/run/dpdk/spdk_pid73323
00:26:27.121 Removing: /var/run/dpdk/spdk_pid73675
00:26:27.121 Removing: /var/run/dpdk/spdk_pid73858
00:26:27.121 Removing: /var/run/dpdk/spdk_pid73950
00:26:27.121 Removing: /var/run/dpdk/spdk_pid74003
00:26:27.121 Removing: /var/run/dpdk/spdk_pid74102
00:26:27.121 Removing: /var/run/dpdk/spdk_pid74127
00:26:27.121 Removing: /var/run/dpdk/spdk_pid74174
00:26:27.121 Removing: /var/run/dpdk/spdk_pid74362
00:26:27.121 Removing: /var/run/dpdk/spdk_pid74570
00:26:27.121 Removing: /var/run/dpdk/spdk_pid74921
00:26:27.121 Removing: /var/run/dpdk/spdk_pid75896
00:26:27.121 Removing: /var/run/dpdk/spdk_pid76243
00:26:27.121 Removing: /var/run/dpdk/spdk_pid76924
00:26:27.121 Removing: /var/run/dpdk/spdk_pid77070
00:26:27.121 Removing: /var/run/dpdk/spdk_pid77164
00:26:27.121 Removing: /var/run/dpdk/spdk_pid77554
00:26:27.121 Removing: /var/run/dpdk/spdk_pid77613
00:26:27.121 Removing: /var/run/dpdk/spdk_pid78154
00:26:27.121 Removing: /var/run/dpdk/spdk_pid78501
00:26:27.121 Removing: /var/run/dpdk/spdk_pid78924
00:26:27.121 Removing: /var/run/dpdk/spdk_pid79062
00:26:27.121 Removing: /var/run/dpdk/spdk_pid79105
00:26:27.121 Removing: /var/run/dpdk/spdk_pid79180
00:26:27.379 Removing: /var/run/dpdk/spdk_pid79230
00:26:27.379 Removing: /var/run/dpdk/spdk_pid79283
00:26:27.379 Removing: /var/run/dpdk/spdk_pid79470
00:26:27.379 Removing: /var/run/dpdk/spdk_pid79539
00:26:27.379 Removing: /var/run/dpdk/spdk_pid79607
00:26:27.379 Removing: /var/run/dpdk/spdk_pid79685
00:26:27.379 Removing: /var/run/dpdk/spdk_pid79719
00:26:27.379 Removing: /var/run/dpdk/spdk_pid79797
00:26:27.379 Removing: /var/run/dpdk/spdk_pid79918
00:26:27.379 Clean
00:26:27.379 09:54:54 -- common/autotest_common.sh@1451 -- # return 0
00:26:27.379 09:54:54 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup
00:26:27.379 09:54:54 -- common/autotest_common.sh@730 -- # xtrace_disable
00:26:27.379 09:54:54 -- common/autotest_common.sh@10 -- # set +x
00:26:27.379 09:54:54 -- spdk/autotest.sh@387 -- # timing_exit autotest
00:26:27.379 09:54:54 -- common/autotest_common.sh@730 -- # xtrace_disable
00:26:27.379 09:54:54 -- common/autotest_common.sh@10 -- # set +x
00:26:27.379 09:54:54 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:26:27.379 09:54:54 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:26:27.379 09:54:54 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:26:27.379 09:54:54 -- spdk/autotest.sh@392 -- # [[ y == y ]]
00:26:27.379 09:54:54 -- spdk/autotest.sh@394 -- # hostname
00:26:27.379 09:54:54 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:26:27.637 geninfo: WARNING: invalid characters removed from testname!
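The coverage capture at autotest.sh@394 above is, once the repeated --rc switches are factored out, a single lcov capture over the build tree. A condensed sketch, assuming the same repo and output layout as this run:

  # The --rc switches repeated on every lcov call in this log.
  LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1'

  # Capture post-test counters from the compiled tree, tagged with the hostname.
  lcov $LCOV_OPTS -q -c --no-external \
      -d /home/vagrant/spdk_repo/spdk \
      -t "$(hostname)" \
      -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info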
00:26:54.196 09:55:18 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:26:54.765 09:55:22 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:26:57.311 09:55:24 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:26:59.858 09:55:27 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:01.269 09:55:28 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:03.813 09:55:31 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:05.717 09:55:33 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:27:05.717 09:55:33 -- spdk/autorun.sh@1 -- $ timing_finish
00:27:05.717 09:55:33 -- common/autotest_common.sh@736 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:27:05.717 09:55:33 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:27:05.717 09:55:33 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:27:05.717 09:55:33 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:27:05.717 + [[ -n 5025 ]]
00:27:05.717 + sudo kill 5025
00:27:05.728 [Pipeline] }
00:27:05.743 [Pipeline] // timeout
00:27:05.749 [Pipeline] }
00:27:05.764 [Pipeline] // stage
00:27:05.769 [Pipeline] }
00:27:05.784 [Pipeline] // catchError
00:27:05.793 [Pipeline] stage
00:27:05.796 [Pipeline] { (Stop VM)
00:27:05.809 [Pipeline] sh
00:27:06.094 + vagrant halt
00:27:08.641 ==> default: Halting domain...
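Before the VM is halted above, autotest.sh@395-404 folds the coverage data into a single report. A condensed sketch of that merge-and-filter chain (LCOV_OPTS as in the capture sketch; out is this run's output directory):

  out=/home/vagrant/spdk_repo/spdk/../output

  # Merge the pre-test baseline with the post-test capture...
  lcov $LCOV_OPTS -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"

  # ...then strip everything that is not SPDK's own code.
  lcov $LCOV_OPTS -q -r "$out/cov_total.info" '*/dpdk/*' -o "$out/cov_total.info"
  lcov $LCOV_OPTS -q -r "$out/cov_total.info" --ignore-errors unused,unused '/usr/*' -o "$out/cov_total.info"
  lcov $LCOV_OPTS -q -r "$out/cov_total.info" '*/examples/vmd/*' -o "$out/cov_total.info"
  lcov $LCOV_OPTS -q -r "$out/cov_total.info" '*/app/spdk_lspci/*' -o "$out/cov_total.info"
  lcov $LCOV_OPTS -q -r "$out/cov_total.info" '*/app/spdk_top/*' -o "$out/cov_total.info"

  # autotest.sh@404 then removes the intermediates (relative names, so
  # presumably executed from the output directory).
  rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR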
00:27:11.946 [Pipeline] sh
00:27:12.222 + vagrant destroy -f
00:27:14.749 ==> default: Removing domain...
00:27:15.327 [Pipeline] sh
00:27:15.607 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:27:15.615 [Pipeline] }
00:27:15.632 [Pipeline] // stage
00:27:15.637 [Pipeline] }
00:27:15.651 [Pipeline] // dir
00:27:15.657 [Pipeline] }
00:27:15.672 [Pipeline] // wrap
00:27:15.678 [Pipeline] }
00:27:15.690 [Pipeline] // catchError
00:27:15.700 [Pipeline] stage
00:27:15.702 [Pipeline] { (Epilogue)
00:27:15.716 [Pipeline] sh
00:27:15.996 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:27:21.295 [Pipeline] catchError
00:27:21.297 [Pipeline] {
00:27:21.310 [Pipeline] sh
00:27:21.588 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:27:21.588 Artifacts sizes are good
00:27:21.596 [Pipeline] }
00:27:21.610 [Pipeline] // catchError
00:27:21.621 [Pipeline] archiveArtifacts
00:27:21.628 Archiving artifacts
00:27:21.753 [Pipeline] cleanWs
00:27:21.765 [WS-CLEANUP] Deleting project workspace...
00:27:21.765 [WS-CLEANUP] Deferred wipeout is used...
00:27:21.771 [WS-CLEANUP] done
00:27:21.773 [Pipeline] }
00:27:21.788 [Pipeline] // stage
00:27:21.793 [Pipeline] }
00:27:21.806 [Pipeline] // node
00:27:21.812 [Pipeline] End of Pipeline
00:27:21.848 Finished: SUCCESS