00:00:00.001 Started by upstream project "autotest-per-patch" build number 132070 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.146 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.147 The recommended git tool is: git 00:00:00.147 using credential 00000000-0000-0000-0000-000000000002 00:00:00.148 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.217 Fetching changes from the remote Git repository 00:00:00.219 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.287 Using shallow fetch with depth 1 00:00:00.287 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.287 > git --version # timeout=10 00:00:00.367 > git --version # 'git version 2.39.2' 00:00:00.367 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.427 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.427 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.502 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.518 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.532 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:07.532 > git config core.sparsecheckout # timeout=10 00:00:07.545 > git read-tree -mu HEAD # timeout=10 00:00:07.564 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:07.585 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:07.585 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:07.675 [Pipeline] Start of Pipeline 00:00:07.690 [Pipeline] library 00:00:07.691 Loading library shm_lib@master 00:00:07.691 Library shm_lib@master is cached. Copying from home. 00:00:07.704 [Pipeline] node 00:00:22.706 Still waiting to schedule task 00:00:22.706 Waiting for next available executor on ‘vagrant-vm-host’ 00:24:03.391 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest_3 00:24:03.392 [Pipeline] { 00:24:03.403 [Pipeline] catchError 00:24:03.405 [Pipeline] { 00:24:03.421 [Pipeline] wrap 00:24:03.431 [Pipeline] { 00:24:03.440 [Pipeline] stage 00:24:03.442 [Pipeline] { (Prologue) 00:24:03.461 [Pipeline] echo 00:24:03.463 Node: VM-host-SM38 00:24:03.469 [Pipeline] cleanWs 00:24:03.530 [WS-CLEANUP] Deleting project workspace... 00:24:03.530 [WS-CLEANUP] Deferred wipeout is used... 
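
Note on the checkout at the top of this log: the job pins jbp to a single commit rather than tracking a branch. It shallow-fetches refs/heads/master with --depth=1, resolves FETCH_HEAD, and force-checkouts the resulting SHA. A minimal sketch reproducing the same sequence by hand, assuming a scratch directory (the credential and proxy wiring done by the Jenkins git plugin is omitted):

  git init jbp && cd jbp
  git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
  # Shallow fetch of master, as in the git plugin invocation logged above
  git fetch --tags --force --progress --depth=1 -- \
      https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
  # Detached checkout of the commit the job resolved from FETCH_HEAD
  git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf
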
00:24:03.536 [WS-CLEANUP] done 00:24:03.750 [Pipeline] setCustomBuildProperty 00:24:03.842 [Pipeline] httpRequest 00:24:04.245 [Pipeline] echo 00:24:04.247 Sorcerer 10.211.164.101 is alive 00:24:04.262 [Pipeline] retry 00:24:04.265 [Pipeline] { 00:24:04.286 [Pipeline] httpRequest 00:24:04.293 HttpMethod: GET 00:24:04.303 URL: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:24:04.304 Sending request to url: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:24:04.304 Response Code: HTTP/1.1 200 OK 00:24:04.305 Success: Status code 200 is in the accepted range: 200,404 00:24:04.305 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_3/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:24:04.440 [Pipeline] } 00:24:04.456 [Pipeline] // retry 00:24:04.462 [Pipeline] sh 00:24:04.745 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:24:04.760 [Pipeline] httpRequest 00:24:05.154 [Pipeline] echo 00:24:05.156 Sorcerer 10.211.164.101 is alive 00:24:05.166 [Pipeline] retry 00:24:05.168 [Pipeline] { 00:24:05.183 [Pipeline] httpRequest 00:24:05.188 HttpMethod: GET 00:24:05.189 URL: http://10.211.164.101/packages/spdk_eca0d2cd8a3ce1016c3c8f0990814efa4b076545.tar.gz 00:24:05.189 Sending request to url: http://10.211.164.101/packages/spdk_eca0d2cd8a3ce1016c3c8f0990814efa4b076545.tar.gz 00:24:05.190 Response Code: HTTP/1.1 200 OK 00:24:05.190 Success: Status code 200 is in the accepted range: 200,404 00:24:05.191 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_3/spdk_eca0d2cd8a3ce1016c3c8f0990814efa4b076545.tar.gz 00:24:07.466 [Pipeline] } 00:24:07.483 [Pipeline] // retry 00:24:07.491 [Pipeline] sh 00:24:07.768 + tar --no-same-owner -xf spdk_eca0d2cd8a3ce1016c3c8f0990814efa4b076545.tar.gz 00:24:11.060 [Pipeline] sh 00:24:11.341 + git -C spdk log --oneline -n5 00:24:11.341 eca0d2cd8 test/iscsi_tgt: Remove support for the namespace arg 00:24:11.341 190a633b5 test/nvmf: Solve ambiguity around $NVMF_SECOND_TARGET_IP 00:24:11.341 4c618f461 test/nvmf: Don't pin nvmf_bdevperf and nvmf_target_disconnect to phy 00:24:11.341 a51629061 test/nvmf: Remove all transport conditions from the test suites 00:24:11.341 9f70a047a test/nvmf: Drop $RDMA_IP_LIST 00:24:11.359 [Pipeline] writeFile 00:24:11.376 [Pipeline] sh 00:24:11.654 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:24:11.665 [Pipeline] sh 00:24:11.943 + cat autorun-spdk.conf 00:24:11.943 SPDK_RUN_FUNCTIONAL_TEST=1 00:24:11.943 SPDK_TEST_NVME=1 00:24:11.943 SPDK_TEST_FTL=1 00:24:11.943 SPDK_TEST_ISAL=1 00:24:11.943 SPDK_RUN_ASAN=1 00:24:11.943 SPDK_RUN_UBSAN=1 00:24:11.943 SPDK_TEST_XNVME=1 00:24:11.943 SPDK_TEST_NVME_FDP=1 00:24:11.943 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:24:11.949 RUN_NIGHTLY=0 00:24:11.950 [Pipeline] } 00:24:11.966 [Pipeline] // stage 00:24:11.981 [Pipeline] stage 00:24:11.983 [Pipeline] { (Run VM) 00:24:11.997 [Pipeline] sh 00:24:12.275 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:24:12.275 + echo 'Start stage prepare_nvme.sh' 00:24:12.275 Start stage prepare_nvme.sh 00:24:12.275 + [[ -n 8 ]] 00:24:12.275 + disk_prefix=ex8 00:24:12.275 + [[ -n /var/jenkins/workspace/nvme-vg-autotest_3 ]] 00:24:12.275 + [[ -e /var/jenkins/workspace/nvme-vg-autotest_3/autorun-spdk.conf ]] 00:24:12.275 + source /var/jenkins/workspace/nvme-vg-autotest_3/autorun-spdk.conf 00:24:12.275 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:24:12.275 ++ SPDK_TEST_NVME=1 00:24:12.275 ++ SPDK_TEST_FTL=1 00:24:12.275 ++ 
SPDK_TEST_ISAL=1 00:24:12.275 ++ SPDK_RUN_ASAN=1 00:24:12.275 ++ SPDK_RUN_UBSAN=1 00:24:12.275 ++ SPDK_TEST_XNVME=1 00:24:12.275 ++ SPDK_TEST_NVME_FDP=1 00:24:12.275 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:24:12.275 ++ RUN_NIGHTLY=0 00:24:12.275 + cd /var/jenkins/workspace/nvme-vg-autotest_3 00:24:12.275 + nvme_files=() 00:24:12.275 + declare -A nvme_files 00:24:12.275 + backend_dir=/var/lib/libvirt/images/backends 00:24:12.275 + nvme_files['nvme.img']=5G 00:24:12.275 + nvme_files['nvme-cmb.img']=5G 00:24:12.275 + nvme_files['nvme-multi0.img']=4G 00:24:12.275 + nvme_files['nvme-multi1.img']=4G 00:24:12.275 + nvme_files['nvme-multi2.img']=4G 00:24:12.275 + nvme_files['nvme-openstack.img']=8G 00:24:12.275 + nvme_files['nvme-zns.img']=5G 00:24:12.275 + (( SPDK_TEST_NVME_PMR == 1 )) 00:24:12.275 + (( SPDK_TEST_FTL == 1 )) 00:24:12.275 + nvme_files["nvme-ftl.img"]=6G 00:24:12.275 + (( SPDK_TEST_NVME_FDP == 1 )) 00:24:12.275 + nvme_files["nvme-fdp.img"]=1G 00:24:12.275 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:24:12.275 + for nvme in "${!nvme_files[@]}" 00:24:12.275 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi2.img -s 4G 00:24:13.208 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:24:13.208 + for nvme in "${!nvme_files[@]}" 00:24:13.208 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-ftl.img -s 6G 00:24:15.733 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:24:15.733 + for nvme in "${!nvme_files[@]}" 00:24:15.733 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-cmb.img -s 5G 00:24:15.733 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:24:15.733 + for nvme in "${!nvme_files[@]}" 00:24:15.733 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-openstack.img -s 8G 00:24:15.733 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:24:15.733 + for nvme in "${!nvme_files[@]}" 00:24:15.733 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-zns.img -s 5G 00:24:15.733 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:24:15.733 + for nvme in "${!nvme_files[@]}" 00:24:15.733 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi1.img -s 4G 00:24:15.733 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:24:15.733 + for nvme in "${!nvme_files[@]}" 00:24:15.733 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi0.img -s 4G 00:24:15.991 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:24:15.991 + for nvme in "${!nvme_files[@]}" 00:24:15.991 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-fdp.img -s 1G 00:24:16.555 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:24:16.555 + for nvme in "${!nvme_files[@]}" 00:24:16.555 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme.img 
-s 5G 00:24:16.555 Formatting '/var/lib/libvirt/images/backends/ex8-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:24:16.555 ++ sudo grep -rl ex8-nvme.img /etc/libvirt/qemu 00:24:16.555 + echo 'End stage prepare_nvme.sh' 00:24:16.555 End stage prepare_nvme.sh 00:24:16.566 [Pipeline] sh 00:24:16.843 + DISTRO=fedora39 00:24:16.843 + CPUS=10 00:24:16.843 + RAM=12288 00:24:16.843 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:24:16.843 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex8-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex8-nvme.img -b /var/lib/libvirt/images/backends/ex8-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex8-nvme-multi1.img:/var/lib/libvirt/images/backends/ex8-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex8-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:24:16.843 00:24:16.843 DIR=/var/jenkins/workspace/nvme-vg-autotest_3/spdk/scripts/vagrant 00:24:16.843 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest_3/spdk 00:24:16.843 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest_3 00:24:16.843 HELP=0 00:24:16.843 DRY_RUN=0 00:24:16.843 NVME_FILE=/var/lib/libvirt/images/backends/ex8-nvme-ftl.img,/var/lib/libvirt/images/backends/ex8-nvme.img,/var/lib/libvirt/images/backends/ex8-nvme-multi0.img,/var/lib/libvirt/images/backends/ex8-nvme-fdp.img, 00:24:16.843 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:24:16.843 NVME_AUTO_CREATE=0 00:24:16.843 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex8-nvme-multi1.img:/var/lib/libvirt/images/backends/ex8-nvme-multi2.img,, 00:24:16.843 NVME_CMB=,,,, 00:24:16.843 NVME_PMR=,,,, 00:24:16.843 NVME_ZNS=,,,, 00:24:16.843 NVME_MS=true,,,, 00:24:16.843 NVME_FDP=,,,on, 00:24:16.843 SPDK_VAGRANT_DISTRO=fedora39 00:24:16.843 SPDK_VAGRANT_VMCPU=10 00:24:16.843 SPDK_VAGRANT_VMRAM=12288 00:24:16.843 SPDK_VAGRANT_PROVIDER=libvirt 00:24:16.843 SPDK_VAGRANT_HTTP_PROXY= 00:24:16.843 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:24:16.843 SPDK_OPENSTACK_NETWORK=0 00:24:16.843 VAGRANT_PACKAGE_BOX=0 00:24:16.843 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest_3/spdk/scripts/vagrant/Vagrantfile 00:24:16.843 FORCE_DISTRO=true 00:24:16.843 VAGRANT_BOX_VERSION= 00:24:16.843 EXTRA_VAGRANTFILES= 00:24:16.843 NIC_MODEL=e1000 00:24:16.843 00:24:16.843 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt' 00:24:16.843 /var/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest_3 00:24:19.370 Bringing machine 'default' up with 'libvirt' provider... 00:24:19.370 ==> default: Creating image (snapshot of base box volume). 00:24:19.936 ==> default: Creating domain with the following settings... 
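
The "Formatting ..." lines above are qemu-img output: create_nvme_img.sh allocates each NVMe backend as a raw file with fallocate preallocation before vagrant attaches it. A minimal equivalent by hand for one of the images, sizes per the nvme_files table earlier (a sketch; that the script wraps qemu-img is inferred from its output format):

  sudo qemu-img create -f raw -o preallocation=falloc \
      /var/lib/libvirt/images/backends/ex8-nvme-fdp.img 1G
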
00:24:19.936 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1730821780_5aae78251292bf383884
00:24:19.936 ==> default: -- Domain type: kvm
00:24:19.936 ==> default: -- Cpus: 10
00:24:19.936 ==> default: -- Feature: acpi
00:24:19.936 ==> default: -- Feature: apic
00:24:19.936 ==> default: -- Feature: pae
00:24:19.936 ==> default: -- Memory: 12288M
00:24:19.936 ==> default: -- Memory Backing: hugepages:
00:24:19.936 ==> default: -- Management MAC:
00:24:19.936 ==> default: -- Loader:
00:24:19.936 ==> default: -- Nvram:
00:24:19.936 ==> default: -- Base box: spdk/fedora39
00:24:19.936 ==> default: -- Storage pool: default
00:24:19.936 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1730821780_5aae78251292bf383884.img (20G)
00:24:19.936 ==> default: -- Volume Cache: default
00:24:19.936 ==> default: -- Kernel:
00:24:19.936 ==> default: -- Initrd:
00:24:19.936 ==> default: -- Graphics Type: vnc
00:24:19.936 ==> default: -- Graphics Port: -1
00:24:19.936 ==> default: -- Graphics IP: 127.0.0.1
00:24:19.936 ==> default: -- Graphics Password: Not defined
00:24:19.936 ==> default: -- Video Type: cirrus
00:24:19.936 ==> default: -- Video VRAM: 9216
00:24:19.936 ==> default: -- Sound Type:
00:24:19.936 ==> default: -- Keymap: en-us
00:24:19.936 ==> default: -- TPM Path:
00:24:19.936 ==> default: -- INPUT: type=mouse, bus=ps2
00:24:19.936 ==> default: -- Command line args:
00:24:19.936 ==> default: -> value=-device,
00:24:19.936 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:24:19.936 ==> default: -> value=-drive,
00:24:19.936 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:24:19.936 ==> default: -> value=-device,
00:24:19.936 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:24:19.936 ==> default: -> value=-device,
00:24:19.936 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:24:19.936 ==> default: -> value=-drive,
00:24:19.936 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme.img,if=none,id=nvme-1-drive0,
00:24:19.936 ==> default: -> value=-device,
00:24:19.936 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:24:19.936 ==> default: -> value=-device,
00:24:19.936 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:24:19.936 ==> default: -> value=-drive,
00:24:19.936 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:24:19.936 ==> default: -> value=-device,
00:24:19.936 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:24:19.936 ==> default: -> value=-drive,
00:24:19.936 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:24:19.936 ==> default: -> value=-device,
00:24:19.936 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:24:19.936 ==> default: -> value=-drive,
00:24:19.936 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:24:19.936 ==> default: -> value=-device,
00:24:19.936 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:24:19.936 ==> default: -> value=-device,
00:24:19.936 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:24:19.936 ==> default: -> value=-device,
00:24:19.936 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:24:19.936 ==> default: -> value=-drive,
00:24:19.936 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:24:19.936 ==> default: -> value=-device,
00:24:19.936 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:24:19.936 ==> default: Creating shared folders metadata...
00:24:19.936 ==> default: Starting domain.
00:24:21.309 ==> default: Waiting for domain to get an IP address...
00:24:36.174 ==> default: Waiting for SSH to become available...
00:24:36.174 ==> default: Configuring and enabling network interfaces...
00:24:39.457 default: SSH address: 192.168.121.111:22
00:24:39.457 default: SSH username: vagrant
00:24:39.457 default: SSH auth method: private key
00:24:40.879 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_3/spdk/ => /home/vagrant/spdk_repo/spdk
00:24:50.948 ==> default: Mounting SSHFS shared folder...
00:24:51.903 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:24:51.903 ==> default: Checking Mount..
00:24:52.841 ==> default: Folder Successfully Mounted!
00:24:52.841
00:24:52.841 SUCCESS!
00:24:52.841
00:24:52.841 cd to /var/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt and type "vagrant ssh" to use.
00:24:52.841 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:24:52.841 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt" to destroy all trace of vm.
00:24:52.841
00:24:52.850 [Pipeline] }
00:24:52.864 [Pipeline] // stage
00:24:52.872 [Pipeline] dir
00:24:52.872 Running in /var/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt
00:24:52.874 [Pipeline] {
00:24:52.885 [Pipeline] catchError
00:24:52.887 [Pipeline] {
00:24:52.898 [Pipeline] sh
00:24:53.179 + vagrant ssh-config --host vagrant
00:24:53.179 + sed -ne '/^Host/,$p'
00:24:53.179 + tee ssh_conf
00:24:55.716 Host vagrant
00:24:55.716 HostName 192.168.121.111
00:24:55.716 User vagrant
00:24:55.716 Port 22
00:24:55.716 UserKnownHostsFile /dev/null
00:24:55.716 StrictHostKeyChecking no
00:24:55.716 PasswordAuthentication no
00:24:55.716 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:24:55.716 IdentitiesOnly yes
00:24:55.716 LogLevel FATAL
00:24:55.716 ForwardAgent yes
00:24:55.716 ForwardX11 yes
00:24:55.716
00:24:55.729 [Pipeline] withEnv
00:24:55.732 [Pipeline] {
00:24:55.743 [Pipeline] sh
00:24:56.017 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:24:56.017 source /etc/os-release
00:24:56.017 [[ -e /image.version ]] && img=$(< /image.version)
00:24:56.017 # Minimal, systemd-like check.
00:24:56.017 if [[ -e /.dockerenv ]]; then 00:24:56.017 # Clear garbage from the node'\''s name: 00:24:56.017 # agt-er_autotest_547-896 -> autotest_547-896 00:24:56.017 # $HOSTNAME is the actual container id 00:24:56.017 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:24:56.017 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:24:56.017 # We can assume this is a mount from a host where container is running, 00:24:56.017 # so fetch its hostname to easily identify the target swarm worker. 00:24:56.017 container="$(< /etc/hostname) ($agent)" 00:24:56.017 else 00:24:56.017 # Fallback 00:24:56.017 container=$agent 00:24:56.017 fi 00:24:56.017 fi 00:24:56.017 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:24:56.017 ' 00:24:56.027 [Pipeline] } 00:24:56.043 [Pipeline] // withEnv 00:24:56.051 [Pipeline] setCustomBuildProperty 00:24:56.066 [Pipeline] stage 00:24:56.069 [Pipeline] { (Tests) 00:24:56.085 [Pipeline] sh 00:24:56.368 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:24:56.647 [Pipeline] sh 00:24:56.933 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:24:57.210 [Pipeline] timeout 00:24:57.210 Timeout set to expire in 50 min 00:24:57.212 [Pipeline] { 00:24:57.226 [Pipeline] sh 00:24:57.507 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:24:58.071 HEAD is now at eca0d2cd8 test/iscsi_tgt: Remove support for the namespace arg 00:24:58.083 [Pipeline] sh 00:24:58.358 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:24:58.628 [Pipeline] sh 00:24:58.903 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_3/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:24:58.915 [Pipeline] sh 00:24:59.260 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:24:59.260 ++ readlink -f spdk_repo 00:24:59.260 + DIR_ROOT=/home/vagrant/spdk_repo 00:24:59.260 + [[ -n /home/vagrant/spdk_repo ]] 00:24:59.260 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:24:59.260 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:24:59.260 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:24:59.260 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:24:59.260 + [[ -d /home/vagrant/spdk_repo/output ]] 00:24:59.260 + [[ nvme-vg-autotest == pkgdep-* ]] 00:24:59.260 + cd /home/vagrant/spdk_repo 00:24:59.260 + source /etc/os-release 00:24:59.260 ++ NAME='Fedora Linux' 00:24:59.260 ++ VERSION='39 (Cloud Edition)' 00:24:59.260 ++ ID=fedora 00:24:59.260 ++ VERSION_ID=39 00:24:59.260 ++ VERSION_CODENAME= 00:24:59.260 ++ PLATFORM_ID=platform:f39 00:24:59.260 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:24:59.260 ++ ANSI_COLOR='0;38;2;60;110;180' 00:24:59.260 ++ LOGO=fedora-logo-icon 00:24:59.260 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:24:59.260 ++ HOME_URL=https://fedoraproject.org/ 00:24:59.260 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:24:59.260 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:24:59.260 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:24:59.260 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:24:59.260 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:24:59.260 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:24:59.260 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:24:59.260 ++ SUPPORT_END=2024-11-12 00:24:59.260 ++ VARIANT='Cloud Edition' 00:24:59.260 ++ VARIANT_ID=cloud 00:24:59.260 + uname -a 00:24:59.260 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:24:59.260 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:24:59.519 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:59.777 Hugepages 00:24:59.777 node hugesize free / total 00:24:59.777 node0 1048576kB 0 / 0 00:24:59.777 node0 2048kB 0 / 0 00:24:59.777 00:24:59.777 Type BDF Vendor Device NUMA Driver Device Block devices 00:24:59.777 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:24:59.777 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:24:59.777 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:24:59.777 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:24:59.777 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:25:00.034 + rm -f /tmp/spdk-ld-path 00:25:00.034 + source autorun-spdk.conf 00:25:00.034 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:25:00.034 ++ SPDK_TEST_NVME=1 00:25:00.034 ++ SPDK_TEST_FTL=1 00:25:00.035 ++ SPDK_TEST_ISAL=1 00:25:00.035 ++ SPDK_RUN_ASAN=1 00:25:00.035 ++ SPDK_RUN_UBSAN=1 00:25:00.035 ++ SPDK_TEST_XNVME=1 00:25:00.035 ++ SPDK_TEST_NVME_FDP=1 00:25:00.035 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:25:00.035 ++ RUN_NIGHTLY=0 00:25:00.035 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:25:00.035 + [[ -n '' ]] 00:25:00.035 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:25:00.035 + for M in /var/spdk/build-*-manifest.txt 00:25:00.035 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:25:00.035 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:25:00.035 + for M in /var/spdk/build-*-manifest.txt 00:25:00.035 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:25:00.035 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:25:00.035 + for M in /var/spdk/build-*-manifest.txt 00:25:00.035 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:25:00.035 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:25:00.035 ++ uname 00:25:00.035 + [[ Linux == \L\i\n\u\x ]] 00:25:00.035 + sudo dmesg -T 00:25:00.035 + sudo dmesg --clear 00:25:00.035 + dmesg_pid=5039 00:25:00.035 
+ sudo dmesg -Tw 00:25:00.035 + [[ Fedora Linux == FreeBSD ]] 00:25:00.035 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:25:00.035 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:25:00.035 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:25:00.035 + [[ -x /usr/src/fio-static/fio ]] 00:25:00.035 + export FIO_BIN=/usr/src/fio-static/fio 00:25:00.035 + FIO_BIN=/usr/src/fio-static/fio 00:25:00.035 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:25:00.035 + [[ ! -v VFIO_QEMU_BIN ]] 00:25:00.035 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:25:00.035 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:25:00.035 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:25:00.035 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:25:00.035 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:25:00.035 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:25:00.035 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:25:00.292 15:50:21 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:25:00.292 15:50:21 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:25:00.292 15:50:21 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:25:00.292 15:50:21 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:25:00.292 15:50:21 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:25:00.293 15:50:21 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:25:00.293 15:50:21 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:25:00.293 15:50:21 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:25:00.293 15:50:21 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:25:00.293 15:50:21 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:25:00.293 15:50:21 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:25:00.293 15:50:21 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0 00:25:00.293 15:50:21 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:25:00.293 15:50:21 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:25:00.293 15:50:21 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:25:00.293 15:50:21 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:00.293 15:50:21 -- scripts/common.sh@15 -- $ shopt -s extglob 00:25:00.293 15:50:21 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:25:00.293 15:50:21 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:00.293 15:50:21 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:00.293 15:50:21 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.293 15:50:21 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.293 15:50:21 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.293 15:50:21 -- paths/export.sh@5 -- $ export PATH 00:25:00.293 15:50:21 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.293 15:50:21 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:25:00.293 15:50:21 -- common/autobuild_common.sh@486 -- $ date +%s 00:25:00.293 15:50:21 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730821821.XXXXXX 00:25:00.293 15:50:21 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730821821.QmbPGV 00:25:00.293 15:50:21 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:25:00.293 15:50:21 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:25:00.293 15:50:21 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:25:00.293 15:50:21 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:25:00.293 15:50:21 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:25:00.293 15:50:21 -- common/autobuild_common.sh@502 -- $ get_config_params 00:25:00.293 15:50:21 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:25:00.293 15:50:21 -- common/autotest_common.sh@10 -- $ set +x 00:25:00.293 15:50:21 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:25:00.293 15:50:21 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:25:00.293 15:50:21 -- pm/common@17 -- $ local monitor 00:25:00.293 15:50:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:00.293 15:50:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:00.293 15:50:21 -- pm/common@25 -- $ sleep 1 00:25:00.293 15:50:21 -- pm/common@21 -- $ date +%s 00:25:00.293 15:50:21 -- pm/common@21 -- $ date +%s 00:25:00.293 15:50:21 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730821821 00:25:00.293 15:50:21 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730821821 00:25:00.293 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730821821_collect-cpu-load.pm.log 00:25:00.293 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730821821_collect-vmstat.pm.log 00:25:01.226 15:50:22 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:25:01.226 15:50:22 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:25:01.226 15:50:22 -- spdk/autobuild.sh@12 -- $ umask 022 00:25:01.226 15:50:22 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:25:01.226 15:50:22 -- spdk/autobuild.sh@16 -- $ date -u 00:25:01.226 Tue Nov 5 03:50:22 PM UTC 2024 00:25:01.226 15:50:22 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:25:01.226 v25.01-pre-166-geca0d2cd8 00:25:01.226 15:50:22 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:25:01.226 15:50:22 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:25:01.226 15:50:22 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:25:01.226 15:50:22 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:25:01.226 15:50:22 -- common/autotest_common.sh@10 -- $ set +x 00:25:01.226 ************************************ 00:25:01.226 START TEST asan 00:25:01.226 ************************************ 00:25:01.226 using asan 00:25:01.226 15:50:22 asan -- common/autotest_common.sh@1127 -- $ echo 'using asan' 00:25:01.226 00:25:01.226 real 0m0.000s 00:25:01.226 user 0m0.000s 00:25:01.226 sys 0m0.000s 00:25:01.226 15:50:22 asan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:25:01.226 ************************************ 00:25:01.226 END TEST asan 00:25:01.226 15:50:22 asan -- common/autotest_common.sh@10 -- $ set +x 00:25:01.226 ************************************ 00:25:01.226 15:50:22 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:25:01.226 15:50:22 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:25:01.226 15:50:22 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:25:01.226 15:50:22 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:25:01.226 15:50:22 -- common/autotest_common.sh@10 -- $ set +x 00:25:01.226 ************************************ 00:25:01.226 START TEST ubsan 00:25:01.226 ************************************ 00:25:01.226 using ubsan 00:25:01.226 ************************************ 00:25:01.226 END TEST ubsan 00:25:01.226 ************************************ 00:25:01.226 15:50:22 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan' 00:25:01.226 00:25:01.226 real 0m0.000s 00:25:01.226 user 0m0.000s 00:25:01.226 sys 0m0.000s 00:25:01.226 15:50:22 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:25:01.226 15:50:22 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:25:01.226 15:50:22 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:25:01.226 15:50:22 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:25:01.226 15:50:22 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:25:01.226 15:50:22 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:25:01.226 15:50:22 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:25:01.226 15:50:22 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:25:01.226 15:50:22 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 
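
The START TEST / END TEST banners and the per-test timing summaries above come from SPDK's run_test wrapper; the xtrace prefixes point at common/autotest_common.sh. A minimal sketch of that banner-and-timing pattern, not the actual implementation:

  run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"                # run the test command and print its timing
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return "$rc"
  }
  run_test asan echo 'using asan'    # as invoked by autobuild.sh above
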
00:25:01.226 15:50:22 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:25:01.226 15:50:22 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:25:01.496 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:25:01.496 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:25:01.753 Using 'verbs' RDMA provider
00:25:12.648 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:25:22.638 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:25:22.638 Creating mk/config.mk...done.
00:25:22.638 Creating mk/cc.flags.mk...done.
00:25:22.638 Type 'make' to build.
00:25:22.638 15:50:43 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:25:22.638 15:50:43 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:25:22.638 15:50:43 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:25:22.638 15:50:43 -- common/autotest_common.sh@10 -- $ set +x
00:25:22.638 ************************************
00:25:22.638 START TEST make
00:25:22.638 ************************************
00:25:22.638 15:50:43 make -- common/autotest_common.sh@1127 -- $ make -j10
00:25:22.638 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:25:22.638 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:25:22.638 meson setup builddir \
00:25:22.638 -Dwith-libaio=enabled \
00:25:22.638 -Dwith-liburing=enabled \
00:25:22.638 -Dwith-libvfn=disabled \
00:25:22.638 -Dwith-spdk=disabled \
00:25:22.638 -Dexamples=false \
00:25:22.638 -Dtests=false \
00:25:22.638 -Dtools=false && \
00:25:22.638 meson compile -C builddir && \
00:25:22.638 cd -)
00:25:22.638 make[1]: Nothing to be done for 'all'.
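
The make target above shells out to Meson for the bundled xnvme. When iterating on just that sub-build, the same configuration can be run standalone with exactly the options logged above (paths as in this VM); this should reproduce the Meson output that follows:

  cd /home/vagrant/spdk_repo/spdk/xnvme
  export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig
  meson setup builddir \
      -Dwith-libaio=enabled -Dwith-liburing=enabled -Dwith-libvfn=disabled \
      -Dwith-spdk=disabled -Dexamples=false -Dtests=false -Dtools=false
  meson compile -C builddir
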
00:25:25.916 The Meson build system 00:25:25.916 Version: 1.5.0 00:25:25.916 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:25:25.916 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:25:25.916 Build type: native build 00:25:25.916 Project name: xnvme 00:25:25.916 Project version: 0.7.5 00:25:25.916 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:25:25.916 C linker for the host machine: cc ld.bfd 2.40-14 00:25:25.916 Host machine cpu family: x86_64 00:25:25.916 Host machine cpu: x86_64 00:25:25.916 Message: host_machine.system: linux 00:25:25.916 Compiler for C supports arguments -Wno-missing-braces: YES 00:25:25.916 Compiler for C supports arguments -Wno-cast-function-type: YES 00:25:25.916 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:25:25.916 Run-time dependency threads found: YES 00:25:25.916 Has header "setupapi.h" : NO 00:25:25.916 Has header "linux/blkzoned.h" : YES 00:25:25.916 Has header "linux/blkzoned.h" : YES (cached) 00:25:25.916 Has header "libaio.h" : YES 00:25:25.916 Library aio found: YES 00:25:25.916 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:25:25.916 Run-time dependency liburing found: YES 2.2 00:25:25.916 Dependency libvfn skipped: feature with-libvfn disabled 00:25:25.916 Found CMake: /usr/bin/cmake (3.27.7) 00:25:25.916 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:25:25.916 Subproject spdk : skipped: feature with-spdk disabled 00:25:25.916 Run-time dependency appleframeworks found: NO (tried framework) 00:25:25.916 Run-time dependency appleframeworks found: NO (tried framework) 00:25:25.916 Library rt found: YES 00:25:25.916 Checking for function "clock_gettime" with dependency -lrt: YES 00:25:25.916 Configuring xnvme_config.h using configuration 00:25:25.916 Configuring xnvme.spec using configuration 00:25:25.916 Run-time dependency bash-completion found: YES 2.11 00:25:25.916 Message: Bash-completions: /usr/share/bash-completion/completions 00:25:25.916 Program cp found: YES (/usr/bin/cp) 00:25:25.916 Build targets in project: 3 00:25:25.916 00:25:25.916 xnvme 0.7.5 00:25:25.916 00:25:25.916 Subprojects 00:25:25.916 spdk : NO Feature 'with-spdk' disabled 00:25:25.916 00:25:25.916 User defined options 00:25:25.916 examples : false 00:25:25.916 tests : false 00:25:25.916 tools : false 00:25:25.916 with-libaio : enabled 00:25:25.916 with-liburing: enabled 00:25:25.916 with-libvfn : disabled 00:25:25.916 with-spdk : disabled 00:25:25.916 00:25:25.916 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:25:25.916 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:25:25.916 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:25:26.174 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:25:26.174 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:25:26.174 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:25:26.174 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:25:26.174 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:25:26.174 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:25:26.174 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:25:26.174 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:25:26.174 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:25:26.174 
[11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:25:26.174 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:25:26.174 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:25:26.174 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:25:26.174 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:25:26.174 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:25:26.174 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:25:26.174 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:25:26.174 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:25:26.174 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:25:26.174 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:25:26.174 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:25:26.174 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:25:26.174 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:25:26.174 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:25:26.174 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:25:26.174 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:25:26.174 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:25:26.432 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:25:26.432 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:25:26.432 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:25:26.432 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:25:26.432 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:25:26.432 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:25:26.432 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:25:26.432 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:25:26.432 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:25:26.432 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:25:26.432 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:25:26.432 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:25:26.432 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:25:26.432 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:25:26.432 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:25:26.432 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:25:26.432 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:25:26.432 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:25:26.432 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:25:26.432 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:25:26.432 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:25:26.432 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:25:26.432 
[51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:25:26.432 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:25:26.432 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:25:26.432 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:25:26.432 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:25:26.432 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:25:26.432 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:25:26.432 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:25:26.432 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:25:26.432 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:25:26.433 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:25:26.690 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:25:26.690 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:25:26.690 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:25:26.690 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:25:26.690 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:25:26.690 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:25:26.690 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:25:26.690 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:25:26.690 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:25:26.690 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:25:26.690 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:25:26.690 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:25:27.255 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:25:27.255 [75/76] Linking static target lib/libxnvme.a 00:25:27.255 [76/76] Linking target lib/libxnvme.so.0.7.5 00:25:27.255 INFO: autodetecting backend as ninja 00:25:27.255 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:25:27.255 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:25:33.882 The Meson build system 00:25:33.882 Version: 1.5.0 00:25:33.882 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:25:33.882 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:25:33.882 Build type: native build 00:25:33.882 Program cat found: YES (/usr/bin/cat) 00:25:33.882 Project name: DPDK 00:25:33.882 Project version: 24.03.0 00:25:33.882 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:25:33.882 C linker for the host machine: cc ld.bfd 2.40-14 00:25:33.882 Host machine cpu family: x86_64 00:25:33.882 Host machine cpu: x86_64 00:25:33.882 Message: ## Building in Developer Mode ## 00:25:33.882 Program pkg-config found: YES (/usr/bin/pkg-config) 00:25:33.882 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:25:33.882 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:25:33.882 Program python3 found: YES (/usr/bin/python3) 00:25:33.882 Program cat found: YES (/usr/bin/cat) 00:25:33.882 Compiler for C supports arguments -march=native: YES 00:25:33.882 Checking for size of "void *" : 8 00:25:33.882 Checking for size of "void *" : 8 (cached) 00:25:33.882 Compiler for C supports link arguments 
-Wl,--undefined-version: YES 00:25:33.882 Library m found: YES 00:25:33.882 Library numa found: YES 00:25:33.882 Has header "numaif.h" : YES 00:25:33.882 Library fdt found: NO 00:25:33.882 Library execinfo found: NO 00:25:33.882 Has header "execinfo.h" : YES 00:25:33.882 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:25:33.882 Run-time dependency libarchive found: NO (tried pkgconfig) 00:25:33.882 Run-time dependency libbsd found: NO (tried pkgconfig) 00:25:33.882 Run-time dependency jansson found: NO (tried pkgconfig) 00:25:33.882 Run-time dependency openssl found: YES 3.1.1 00:25:33.882 Run-time dependency libpcap found: YES 1.10.4 00:25:33.882 Has header "pcap.h" with dependency libpcap: YES 00:25:33.882 Compiler for C supports arguments -Wcast-qual: YES 00:25:33.882 Compiler for C supports arguments -Wdeprecated: YES 00:25:33.882 Compiler for C supports arguments -Wformat: YES 00:25:33.882 Compiler for C supports arguments -Wformat-nonliteral: NO 00:25:33.882 Compiler for C supports arguments -Wformat-security: NO 00:25:33.882 Compiler for C supports arguments -Wmissing-declarations: YES 00:25:33.882 Compiler for C supports arguments -Wmissing-prototypes: YES 00:25:33.882 Compiler for C supports arguments -Wnested-externs: YES 00:25:33.882 Compiler for C supports arguments -Wold-style-definition: YES 00:25:33.882 Compiler for C supports arguments -Wpointer-arith: YES 00:25:33.882 Compiler for C supports arguments -Wsign-compare: YES 00:25:33.882 Compiler for C supports arguments -Wstrict-prototypes: YES 00:25:33.882 Compiler for C supports arguments -Wundef: YES 00:25:33.882 Compiler for C supports arguments -Wwrite-strings: YES 00:25:33.882 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:25:33.882 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:25:33.882 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:25:33.882 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:25:33.882 Program objdump found: YES (/usr/bin/objdump) 00:25:33.882 Compiler for C supports arguments -mavx512f: YES 00:25:33.882 Checking if "AVX512 checking" compiles: YES 00:25:33.882 Fetching value of define "__SSE4_2__" : 1 00:25:33.882 Fetching value of define "__AES__" : 1 00:25:33.882 Fetching value of define "__AVX__" : 1 00:25:33.882 Fetching value of define "__AVX2__" : 1 00:25:33.882 Fetching value of define "__AVX512BW__" : 1 00:25:33.882 Fetching value of define "__AVX512CD__" : 1 00:25:33.882 Fetching value of define "__AVX512DQ__" : 1 00:25:33.882 Fetching value of define "__AVX512F__" : 1 00:25:33.882 Fetching value of define "__AVX512VL__" : 1 00:25:33.882 Fetching value of define "__PCLMUL__" : 1 00:25:33.882 Fetching value of define "__RDRND__" : 1 00:25:33.882 Fetching value of define "__RDSEED__" : 1 00:25:33.882 Fetching value of define "__VPCLMULQDQ__" : 1 00:25:33.882 Fetching value of define "__znver1__" : (undefined) 00:25:33.882 Fetching value of define "__znver2__" : (undefined) 00:25:33.882 Fetching value of define "__znver3__" : (undefined) 00:25:33.882 Fetching value of define "__znver4__" : (undefined) 00:25:33.882 Library asan found: YES 00:25:33.882 Compiler for C supports arguments -Wno-format-truncation: YES 00:25:33.882 Message: lib/log: Defining dependency "log" 00:25:33.882 Message: lib/kvargs: Defining dependency "kvargs" 00:25:33.882 Message: lib/telemetry: Defining dependency "telemetry" 00:25:33.882 Library rt found: YES 00:25:33.882 Checking for function "getentropy" : NO 00:25:33.882 Message: 
lib/eal: Defining dependency "eal" 00:25:33.882 Message: lib/ring: Defining dependency "ring" 00:25:33.882 Message: lib/rcu: Defining dependency "rcu" 00:25:33.882 Message: lib/mempool: Defining dependency "mempool" 00:25:33.882 Message: lib/mbuf: Defining dependency "mbuf" 00:25:33.882 Fetching value of define "__PCLMUL__" : 1 (cached) 00:25:33.882 Fetching value of define "__AVX512F__" : 1 (cached) 00:25:33.882 Fetching value of define "__AVX512BW__" : 1 (cached) 00:25:33.882 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:25:33.882 Fetching value of define "__AVX512VL__" : 1 (cached) 00:25:33.882 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:25:33.882 Compiler for C supports arguments -mpclmul: YES 00:25:33.882 Compiler for C supports arguments -maes: YES 00:25:33.882 Compiler for C supports arguments -mavx512f: YES (cached) 00:25:33.882 Compiler for C supports arguments -mavx512bw: YES 00:25:33.882 Compiler for C supports arguments -mavx512dq: YES 00:25:33.882 Compiler for C supports arguments -mavx512vl: YES 00:25:33.882 Compiler for C supports arguments -mvpclmulqdq: YES 00:25:33.882 Compiler for C supports arguments -mavx2: YES 00:25:33.882 Compiler for C supports arguments -mavx: YES 00:25:33.882 Message: lib/net: Defining dependency "net" 00:25:33.882 Message: lib/meter: Defining dependency "meter" 00:25:33.882 Message: lib/ethdev: Defining dependency "ethdev" 00:25:33.882 Message: lib/pci: Defining dependency "pci" 00:25:33.882 Message: lib/cmdline: Defining dependency "cmdline" 00:25:33.882 Message: lib/hash: Defining dependency "hash" 00:25:33.882 Message: lib/timer: Defining dependency "timer" 00:25:33.882 Message: lib/compressdev: Defining dependency "compressdev" 00:25:33.882 Message: lib/cryptodev: Defining dependency "cryptodev" 00:25:33.882 Message: lib/dmadev: Defining dependency "dmadev" 00:25:33.882 Compiler for C supports arguments -Wno-cast-qual: YES 00:25:33.882 Message: lib/power: Defining dependency "power" 00:25:33.882 Message: lib/reorder: Defining dependency "reorder" 00:25:33.882 Message: lib/security: Defining dependency "security" 00:25:33.882 Has header "linux/userfaultfd.h" : YES 00:25:33.882 Has header "linux/vduse.h" : YES 00:25:33.882 Message: lib/vhost: Defining dependency "vhost" 00:25:33.882 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:25:33.882 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:25:33.882 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:25:33.882 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:25:33.882 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:25:33.882 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:25:33.882 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:25:33.882 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:25:33.882 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:25:33.882 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:25:33.882 Program doxygen found: YES (/usr/local/bin/doxygen) 00:25:33.882 Configuring doxy-api-html.conf using configuration 00:25:33.882 Configuring doxy-api-man.conf using configuration 00:25:33.882 Program mandb found: YES (/usr/bin/mandb) 00:25:33.882 Program sphinx-build found: NO 00:25:33.882 Configuring rte_build_config.h using configuration 00:25:33.882 Message: 00:25:33.882 ================= 00:25:33.882 Applications Enabled 00:25:33.882 
=================
00:25:33.882
00:25:33.882 apps:
00:25:33.882
00:25:33.882
00:25:33.882 Message:
00:25:33.882 =================
00:25:33.882 Libraries Enabled
00:25:33.882 =================
00:25:33.882
00:25:33.882 libs:
00:25:33.882 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:25:33.882 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:25:33.882 cryptodev, dmadev, power, reorder, security, vhost,
00:25:33.882
00:25:33.882 Message:
00:25:33.882 ===============
00:25:33.882 Drivers Enabled
00:25:33.882 ===============
00:25:33.882
00:25:33.882 common:
00:25:33.882
00:25:33.882 bus:
00:25:33.882 pci, vdev,
00:25:33.882 mempool:
00:25:33.882 ring,
00:25:33.882 dma:
00:25:33.882
00:25:33.882 net:
00:25:33.882
00:25:33.882 crypto:
00:25:33.882
00:25:33.882 compress:
00:25:33.882
00:25:33.882 vdpa:
00:25:33.882
00:25:33.882
00:25:33.882 Message:
00:25:33.882 =================
00:25:33.882 Content Skipped
00:25:33.882 =================
00:25:33.882
00:25:33.882 apps:
00:25:33.882 dumpcap: explicitly disabled via build config
00:25:33.882 graph: explicitly disabled via build config
00:25:33.882 pdump: explicitly disabled via build config
00:25:33.882 proc-info: explicitly disabled via build config
00:25:33.882 test-acl: explicitly disabled via build config
00:25:33.882 test-bbdev: explicitly disabled via build config
00:25:33.882 test-cmdline: explicitly disabled via build config
00:25:33.882 test-compress-perf: explicitly disabled via build config
00:25:33.882 test-crypto-perf: explicitly disabled via build config
00:25:33.882 test-dma-perf: explicitly disabled via build config
00:25:33.882 test-eventdev: explicitly disabled via build config
00:25:33.882 test-fib: explicitly disabled via build config
00:25:33.882 test-flow-perf: explicitly disabled via build config
00:25:33.883 test-gpudev: explicitly disabled via build config
00:25:33.883 test-mldev: explicitly disabled via build config
00:25:33.883 test-pipeline: explicitly disabled via build config
00:25:33.883 test-pmd: explicitly disabled via build config
00:25:33.883 test-regex: explicitly disabled via build config
00:25:33.883 test-sad: explicitly disabled via build config
00:25:33.883 test-security-perf: explicitly disabled via build config
00:25:33.883
00:25:33.883 libs:
00:25:33.883 argparse: explicitly disabled via build config
00:25:33.883 metrics: explicitly disabled via build config
00:25:33.883 acl: explicitly disabled via build config
00:25:33.883 bbdev: explicitly disabled via build config
00:25:33.883 bitratestats: explicitly disabled via build config
00:25:33.883 bpf: explicitly disabled via build config
00:25:33.883 cfgfile: explicitly disabled via build config
00:25:33.883 distributor: explicitly disabled via build config
00:25:33.883 efd: explicitly disabled via build config
00:25:33.883 eventdev: explicitly disabled via build config
00:25:33.883 dispatcher: explicitly disabled via build config
00:25:33.883 gpudev: explicitly disabled via build config
00:25:33.883 gro: explicitly disabled via build config
00:25:33.883 gso: explicitly disabled via build config
00:25:33.883 ip_frag: explicitly disabled via build config
00:25:33.883 jobstats: explicitly disabled via build config
00:25:33.883 latencystats: explicitly disabled via build config
00:25:33.883 lpm: explicitly disabled via build config
00:25:33.883 member: explicitly disabled via build config
00:25:33.883 pcapng: explicitly disabled via build config
00:25:33.883 rawdev: explicitly disabled via build config
00:25:33.883 regexdev: explicitly disabled via build config
00:25:33.883 mldev: explicitly disabled via build config
00:25:33.883 rib: explicitly disabled via build config
00:25:33.883 sched: explicitly disabled via build config
00:25:33.883 stack: explicitly disabled via build config
00:25:33.883 ipsec: explicitly disabled via build config
00:25:33.883 pdcp: explicitly disabled via build config
00:25:33.883 fib: explicitly disabled via build config
00:25:33.883 port: explicitly disabled via build config
00:25:33.883 pdump: explicitly disabled via build config
00:25:33.883 table: explicitly disabled via build config
00:25:33.883 pipeline: explicitly disabled via build config
00:25:33.883 graph: explicitly disabled via build config
00:25:33.883 node: explicitly disabled via build config
00:25:33.883
00:25:33.883 drivers:
00:25:33.883 common/cpt: not in enabled drivers build config
00:25:33.883 common/dpaax: not in enabled drivers build config
00:25:33.883 common/iavf: not in enabled drivers build config
00:25:33.883 common/idpf: not in enabled drivers build config
00:25:33.883 common/ionic: not in enabled drivers build config
00:25:33.883 common/mvep: not in enabled drivers build config
00:25:33.883 common/octeontx: not in enabled drivers build config
00:25:33.883 bus/auxiliary: not in enabled drivers build config
00:25:33.883 bus/cdx: not in enabled drivers build config
00:25:33.883 bus/dpaa: not in enabled drivers build config
00:25:33.883 bus/fslmc: not in enabled drivers build config
00:25:33.883 bus/ifpga: not in enabled drivers build config
00:25:33.883 bus/platform: not in enabled drivers build config
00:25:33.883 bus/uacce: not in enabled drivers build config
00:25:33.883 bus/vmbus: not in enabled drivers build config
00:25:33.883 common/cnxk: not in enabled drivers build config
00:25:33.883 common/mlx5: not in enabled drivers build config
00:25:33.883 common/nfp: not in enabled drivers build config
00:25:33.883 common/nitrox: not in enabled drivers build config
00:25:33.883 common/qat: not in enabled drivers build config
00:25:33.883 common/sfc_efx: not in enabled drivers build config
00:25:33.883 mempool/bucket: not in enabled drivers build config
00:25:33.883 mempool/cnxk: not in enabled drivers build config
00:25:33.883 mempool/dpaa: not in enabled drivers build config
00:25:33.883 mempool/dpaa2: not in enabled drivers build config
00:25:33.883 mempool/octeontx: not in enabled drivers build config
00:25:33.883 mempool/stack: not in enabled drivers build config
00:25:33.883 dma/cnxk: not in enabled drivers build config
00:25:33.883 dma/dpaa: not in enabled drivers build config
00:25:33.883 dma/dpaa2: not in enabled drivers build config
00:25:33.883 dma/hisilicon: not in enabled drivers build config
00:25:33.883 dma/idxd: not in enabled drivers build config
00:25:33.883 dma/ioat: not in enabled drivers build config
00:25:33.883 dma/skeleton: not in enabled drivers build config
00:25:33.883 net/af_packet: not in enabled drivers build config
00:25:33.883 net/af_xdp: not in enabled drivers build config
00:25:33.883 net/ark: not in enabled drivers build config
00:25:33.883 net/atlantic: not in enabled drivers build config
00:25:33.883 net/avp: not in enabled drivers build config
00:25:33.883 net/axgbe: not in enabled drivers build config
00:25:33.883 net/bnx2x: not in enabled drivers build config
00:25:33.883 net/bnxt: not in enabled drivers build config
00:25:33.883 net/bonding: not in enabled drivers build config
00:25:33.883 net/cnxk: not in enabled drivers build config
00:25:33.883 net/cpfl: not in enabled drivers build config
00:25:33.883 net/cxgbe: not in enabled drivers build config
00:25:33.883 net/dpaa: not in enabled drivers build config
00:25:33.883 net/dpaa2: not in enabled drivers build config
00:25:33.883 net/e1000: not in enabled drivers build config
00:25:33.883 net/ena: not in enabled drivers build config
00:25:33.883 net/enetc: not in enabled drivers build config
00:25:33.883 net/enetfec: not in enabled drivers build config
00:25:33.883 net/enic: not in enabled drivers build config
00:25:33.883 net/failsafe: not in enabled drivers build config
00:25:33.883 net/fm10k: not in enabled drivers build config
00:25:33.883 net/gve: not in enabled drivers build config
00:25:33.883 net/hinic: not in enabled drivers build config
00:25:33.883 net/hns3: not in enabled drivers build config
00:25:33.883 net/i40e: not in enabled drivers build config
00:25:33.883 net/iavf: not in enabled drivers build config
00:25:33.883 net/ice: not in enabled drivers build config
00:25:33.883 net/idpf: not in enabled drivers build config
00:25:33.883 net/igc: not in enabled drivers build config
00:25:33.883 net/ionic: not in enabled drivers build config
00:25:33.883 net/ipn3ke: not in enabled drivers build config
00:25:33.883 net/ixgbe: not in enabled drivers build config
00:25:33.883 net/mana: not in enabled drivers build config
00:25:33.883 net/memif: not in enabled drivers build config
00:25:33.883 net/mlx4: not in enabled drivers build config
00:25:33.883 net/mlx5: not in enabled drivers build config
00:25:33.883 net/mvneta: not in enabled drivers build config
00:25:33.883 net/mvpp2: not in enabled drivers build config
00:25:33.883 net/netvsc: not in enabled drivers build config
00:25:33.883 net/nfb: not in enabled drivers build config
00:25:33.883 net/nfp: not in enabled drivers build config
00:25:33.883 net/ngbe: not in enabled drivers build config
00:25:33.883 net/null: not in enabled drivers build config
00:25:33.883 net/octeontx: not in enabled drivers build config
00:25:33.883 net/octeon_ep: not in enabled drivers build config
00:25:33.883 net/pcap: not in enabled drivers build config
00:25:33.883 net/pfe: not in enabled drivers build config
00:25:33.883 net/qede: not in enabled drivers build config
00:25:33.883 net/ring: not in enabled drivers build config
00:25:33.883 net/sfc: not in enabled drivers build config
00:25:33.883 net/softnic: not in enabled drivers build config
00:25:33.883 net/tap: not in enabled drivers build config
00:25:33.883 net/thunderx: not in enabled drivers build config
00:25:33.883 net/txgbe: not in enabled drivers build config
00:25:33.883 net/vdev_netvsc: not in enabled drivers build config
00:25:33.883 net/vhost: not in enabled drivers build config
00:25:33.883 net/virtio: not in enabled drivers build config
00:25:33.883 net/vmxnet3: not in enabled drivers build config
00:25:33.883 raw/*: missing internal dependency, "rawdev"
00:25:33.883 crypto/armv8: not in enabled drivers build config
00:25:33.883 crypto/bcmfs: not in enabled drivers build config
00:25:33.883 crypto/caam_jr: not in enabled drivers build config
00:25:33.883 crypto/ccp: not in enabled drivers build config
00:25:33.883 crypto/cnxk: not in enabled drivers build config
00:25:33.883 crypto/dpaa_sec: not in enabled drivers build config
00:25:33.883 crypto/dpaa2_sec: not in enabled drivers build config
00:25:33.883 crypto/ipsec_mb: not in enabled drivers build config
00:25:33.883 crypto/mlx5: not in enabled drivers build config
00:25:33.883 crypto/mvsam: not in enabled drivers build config
00:25:33.883 crypto/nitrox: not in enabled drivers build config
00:25:33.883 crypto/null: not in enabled drivers build config
00:25:33.883 crypto/octeontx: not in enabled drivers build config
00:25:33.883 crypto/openssl: not in enabled drivers build config
00:25:33.883 crypto/scheduler: not in enabled drivers build config
00:25:33.883 crypto/uadk: not in enabled drivers build config
00:25:33.883 crypto/virtio: not in enabled drivers build config
00:25:33.883 compress/isal: not in enabled drivers build config
00:25:33.883 compress/mlx5: not in enabled drivers build config
00:25:33.883 compress/nitrox: not in enabled drivers build config
00:25:33.883 compress/octeontx: not in enabled drivers build config
00:25:33.883 compress/zlib: not in enabled drivers build config
00:25:33.883 regex/*: missing internal dependency, "regexdev"
00:25:33.883 ml/*: missing internal dependency, "mldev"
00:25:33.883 vdpa/ifc: not in enabled drivers build config
00:25:33.883 vdpa/mlx5: not in enabled drivers build config
00:25:33.883 vdpa/nfp: not in enabled drivers build config
00:25:33.883 vdpa/sfc: not in enabled drivers build config
00:25:33.883 event/*: missing internal dependency, "eventdev"
00:25:33.883 baseband/*: missing internal dependency, "bbdev"
00:25:33.883 gpu/*: missing internal dependency, "gpudev"
00:25:33.883
00:25:33.883
00:25:33.883 Build targets in project: 84
00:25:33.883
00:25:33.883 DPDK 24.03.0
00:25:33.883
00:25:33.883 User defined options
00:25:33.883 buildtype : debug
00:25:33.883 default_library : shared
00:25:33.883 libdir : lib
00:25:33.883 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:25:33.883 b_sanitize : address
00:25:33.883 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:25:33.883 c_link_args :
00:25:33.884 cpu_instruction_set: native
00:25:33.884 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:25:33.884 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:25:33.884 enable_docs : false
00:25:33.884 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:25:33.884 enable_kmods : false
00:25:33.884 max_lcores : 128
00:25:33.884 tests : false
00:25:33.884
00:25:33.884 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:25:33.884 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:25:33.884 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:25:33.884 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:25:33.884 [3/267] Linking static target lib/librte_kvargs.a
00:25:33.884 [4/267] Compiling C object lib/librte_log.a.p/log_log.c.o
00:25:33.884 [5/267] Linking static target lib/librte_log.a
00:25:33.884 [6/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:25:33.884 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:25:33.884 [8/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:25:34.142 [9/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:25:34.142 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:25:34.142
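For anyone reproducing the DPDK step by hand: the "User defined options" summary above maps option for option onto a meson command line. The sketch below is a hedged reconstruction from the logged values only; the CI harness actually drives this through SPDK's own build scripts, so the exact invocation it used is an assumption. Option values and paths are copied verbatim from the summary, and the -j 10 matches the backend command the log reports further down.

  # run from the DPDK source tree (spdk/dpdk in this checkout)
  meson setup /home/vagrant/spdk_repo/spdk/dpdk/build-tmp \
    --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build --libdir=lib \
    -Dbuildtype=debug -Ddefault_library=shared -Db_sanitize=address \
    -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
    -Dcpu_instruction_set=native -Dmax_lcores=128 -Dtests=false \
    -Denable_docs=false -Denable_kmods=false \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
    -Ddisable_apps=dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test \
    -Ddisable_libs=acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
  # compile with the same parallelism the log shows below
  ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10
  # inspect what an existing build directory was configured with
  meson configure /home/vagrant/spdk_repo/spdk/dpdk/build-tmp

Note the effect of the configuration: everything except the bus/pci, bus/vdev, and mempool/ring drivers is skipped, which is why the "Content Skipped" list above is so long and the project shrinks to 84 build targets.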
[11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:25:34.142 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:25:34.142 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:25:34.142 [14/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:25:34.142 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:25:34.142 [16/267] Linking static target lib/librte_telemetry.a 00:25:34.142 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:25:34.142 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:25:34.400 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:25:34.400 [20/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:25:34.400 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:25:34.400 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:25:34.400 [23/267] Linking target lib/librte_log.so.24.1 00:25:34.657 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:25:34.657 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:25:34.657 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:25:34.657 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:25:34.657 [28/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:25:34.657 [29/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:25:34.657 [30/267] Linking target lib/librte_kvargs.so.24.1 00:25:34.657 [31/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:25:34.915 [32/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:25:34.915 [33/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:25:34.915 [34/267] Linking target lib/librte_telemetry.so.24.1 00:25:34.915 [35/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:25:34.915 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:25:34.915 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:25:34.915 [38/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:25:34.915 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:25:34.915 [40/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:25:35.180 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:25:35.180 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:25:35.180 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:25:35.180 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:25:35.180 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:25:35.180 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:25:35.437 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:25:35.437 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:25:35.437 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 
00:25:35.437 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:25:35.437 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:25:35.693 [52/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:25:35.693 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:25:35.693 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:25:35.693 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:25:35.693 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:25:35.693 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:25:35.950 [58/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:25:35.950 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:25:35.950 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:25:35.950 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:25:35.950 [62/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:25:35.950 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:25:35.950 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:25:35.950 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:25:35.950 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:25:36.208 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:25:36.208 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:25:36.466 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:25:36.466 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:25:36.466 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:25:36.466 [72/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:25:36.466 [73/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:25:36.466 [74/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:25:36.466 [75/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:25:36.466 [76/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:25:36.466 [77/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:25:36.466 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:25:36.724 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:25:36.724 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:25:36.724 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:25:36.724 [82/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:25:36.980 [83/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:25:36.980 [84/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:25:36.980 [85/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:25:36.980 [86/267] Linking static target lib/librte_ring.a 00:25:36.980 [87/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:25:37.238 [88/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:25:37.238 [89/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:25:37.238 [90/267] Compiling C object 
lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:25:37.238 [91/267] Linking static target lib/librte_eal.a 00:25:37.238 [92/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:25:37.238 [93/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:25:37.238 [94/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:25:37.238 [95/267] Linking static target lib/librte_rcu.a 00:25:37.238 [96/267] Linking static target lib/librte_mempool.a 00:25:37.496 [97/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:25:37.496 [98/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:25:37.496 [99/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:25:37.496 [100/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:25:37.754 [101/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:25:37.754 [102/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:25:37.754 [103/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:25:37.754 [104/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:25:37.754 [105/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:25:37.754 [106/267] Linking static target lib/librte_net.a 00:25:37.754 [107/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:25:37.754 [108/267] Linking static target lib/librte_meter.a 00:25:37.754 [109/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:25:37.754 [110/267] Linking static target lib/librte_mbuf.a 00:25:38.012 [111/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:25:38.012 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:25:38.012 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:25:38.012 [114/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:25:38.271 [115/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:25:38.271 [116/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:25:38.271 [117/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:25:38.528 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:25:38.528 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:25:38.528 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:25:38.787 [121/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:25:38.787 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:25:38.787 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:25:38.787 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:25:38.787 [125/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:25:38.787 [126/267] Linking static target lib/librte_pci.a 00:25:39.045 [127/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:25:39.045 [128/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:25:39.045 [129/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:25:39.045 [130/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:25:39.045 [131/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:25:39.314 
[132/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:25:39.314 [133/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:25:39.314 [134/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:25:39.314 [135/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:25:39.314 [136/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:25:39.314 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:25:39.314 [138/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:25:39.314 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:25:39.314 [140/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:25:39.314 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:25:39.314 [142/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:25:39.314 [143/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:25:39.314 [144/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:25:39.314 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:25:39.573 [146/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:25:39.573 [147/267] Linking static target lib/librte_cmdline.a 00:25:39.573 [148/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:25:39.832 [149/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:25:39.832 [150/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:25:39.832 [151/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:25:39.832 [152/267] Linking static target lib/librte_timer.a 00:25:39.832 [153/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:25:40.090 [154/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:25:40.090 [155/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:25:40.090 [156/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:25:40.090 [157/267] Linking static target lib/librte_ethdev.a 00:25:40.090 [158/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:25:40.091 [159/267] Linking static target lib/librte_compressdev.a 00:25:40.091 [160/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:25:40.359 [161/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:25:40.359 [162/267] Linking static target lib/librte_hash.a 00:25:40.359 [163/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:25:40.359 [164/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:25:40.359 [165/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:25:40.359 [166/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:25:40.359 [167/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:25:40.359 [168/267] Linking static target lib/librte_dmadev.a 00:25:40.617 [169/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:25:40.617 [170/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:25:40.617 [171/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:25:40.617 [172/267] 
Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:25:40.876 [173/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:25:40.876 [174/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:25:41.135 [175/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:25:41.135 [176/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:25:41.135 [177/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:25:41.135 [178/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:25:41.135 [179/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:25:41.135 [180/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:25:41.135 [181/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:25:41.135 [182/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:25:41.135 [183/267] Linking static target lib/librte_power.a 00:25:41.135 [184/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:25:41.135 [185/267] Linking static target lib/librte_cryptodev.a 00:25:41.394 [186/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:25:41.652 [187/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:25:41.652 [188/267] Linking static target lib/librte_reorder.a 00:25:41.652 [189/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:25:41.652 [190/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:25:41.652 [191/267] Linking static target lib/librte_security.a 00:25:41.652 [192/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:25:41.908 [193/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:25:41.908 [194/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:25:42.170 [195/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:25:42.170 [196/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:25:42.170 [197/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:25:42.451 [198/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:25:42.451 [199/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:25:42.451 [200/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:25:42.451 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:25:42.709 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:25:42.709 [203/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:25:42.709 [204/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:25:42.709 [205/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:25:42.709 [206/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:25:42.709 [207/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:25:42.967 [208/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:25:42.967 [209/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:25:42.967 [210/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:25:42.967 [211/267] Compiling C object 
drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:25:42.967 [212/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:25:42.967 [213/267] Linking static target drivers/librte_bus_vdev.a 00:25:43.225 [214/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:25:43.225 [215/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:25:43.225 [216/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:25:43.225 [217/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:25:43.225 [218/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:25:43.225 [219/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:25:43.225 [220/267] Linking static target drivers/librte_bus_pci.a 00:25:43.225 [221/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:25:43.225 [222/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:25:43.225 [223/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:25:43.225 [224/267] Linking static target drivers/librte_mempool_ring.a 00:25:43.482 [225/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:25:43.741 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:25:43.741 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:25:45.115 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:25:45.115 [229/267] Linking target lib/librte_eal.so.24.1 00:25:45.115 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:25:45.115 [231/267] Linking target lib/librte_pci.so.24.1 00:25:45.115 [232/267] Linking target lib/librte_ring.so.24.1 00:25:45.115 [233/267] Linking target lib/librte_dmadev.so.24.1 00:25:45.115 [234/267] Linking target lib/librte_meter.so.24.1 00:25:45.115 [235/267] Linking target lib/librte_timer.so.24.1 00:25:45.115 [236/267] Linking target drivers/librte_bus_vdev.so.24.1 00:25:45.115 [237/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:25:45.115 [238/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:25:45.115 [239/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:25:45.373 [240/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:25:45.373 [241/267] Linking target drivers/librte_bus_pci.so.24.1 00:25:45.373 [242/267] Linking target lib/librte_mempool.so.24.1 00:25:45.373 [243/267] Linking target lib/librte_rcu.so.24.1 00:25:45.373 [244/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:25:45.373 [245/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:25:45.373 [246/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:25:45.373 [247/267] Linking target drivers/librte_mempool_ring.so.24.1 00:25:45.373 [248/267] Linking target lib/librte_mbuf.so.24.1 00:25:45.630 [249/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:25:45.630 [250/267] Linking target lib/librte_reorder.so.24.1 00:25:45.630 [251/267] 
Linking target lib/librte_compressdev.so.24.1 00:25:45.631 [252/267] Linking target lib/librte_net.so.24.1 00:25:45.631 [253/267] Linking target lib/librte_cryptodev.so.24.1 00:25:45.631 [254/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:25:45.631 [255/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:25:45.892 [256/267] Linking target lib/librte_security.so.24.1 00:25:45.892 [257/267] Linking target lib/librte_hash.so.24.1 00:25:45.892 [258/267] Linking target lib/librte_cmdline.so.24.1 00:25:45.892 [259/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:25:45.892 [260/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:25:45.892 [261/267] Linking target lib/librte_ethdev.so.24.1 00:25:46.150 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:25:46.150 [263/267] Linking target lib/librte_power.so.24.1 00:25:46.715 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:25:46.715 [265/267] Linking static target lib/librte_vhost.a 00:25:48.087 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:25:48.087 [267/267] Linking target lib/librte_vhost.so.24.1 00:25:48.345 INFO: autodetecting backend as ninja 00:25:48.345 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:26:03.230 CC lib/ut_mock/mock.o 00:26:03.230 CC lib/log/log_flags.o 00:26:03.230 CC lib/log/log.o 00:26:03.230 CC lib/log/log_deprecated.o 00:26:03.230 CC lib/ut/ut.o 00:26:03.231 LIB libspdk_ut_mock.a 00:26:03.231 LIB libspdk_log.a 00:26:03.231 SO libspdk_ut_mock.so.6.0 00:26:03.231 LIB libspdk_ut.a 00:26:03.231 SO libspdk_log.so.7.1 00:26:03.231 SO libspdk_ut.so.2.0 00:26:03.231 SYMLINK libspdk_ut_mock.so 00:26:03.231 SYMLINK libspdk_log.so 00:26:03.231 SYMLINK libspdk_ut.so 00:26:03.231 CC lib/util/base64.o 00:26:03.231 CC lib/util/bit_array.o 00:26:03.231 CC lib/util/cpuset.o 00:26:03.231 CC lib/util/crc16.o 00:26:03.231 CC lib/util/crc32c.o 00:26:03.231 CC lib/util/crc32.o 00:26:03.231 CC lib/dma/dma.o 00:26:03.231 CXX lib/trace_parser/trace.o 00:26:03.231 CC lib/ioat/ioat.o 00:26:03.231 CC lib/vfio_user/host/vfio_user_pci.o 00:26:03.231 CC lib/util/crc32_ieee.o 00:26:03.231 CC lib/util/crc64.o 00:26:03.231 CC lib/util/dif.o 00:26:03.231 CC lib/util/fd.o 00:26:03.231 LIB libspdk_dma.a 00:26:03.231 SO libspdk_dma.so.5.0 00:26:03.231 CC lib/util/fd_group.o 00:26:03.231 CC lib/vfio_user/host/vfio_user.o 00:26:03.231 CC lib/util/file.o 00:26:03.231 CC lib/util/hexlify.o 00:26:03.231 SYMLINK libspdk_dma.so 00:26:03.231 CC lib/util/iov.o 00:26:03.231 CC lib/util/math.o 00:26:03.231 LIB libspdk_ioat.a 00:26:03.231 SO libspdk_ioat.so.7.0 00:26:03.231 CC lib/util/net.o 00:26:03.231 CC lib/util/pipe.o 00:26:03.231 CC lib/util/strerror_tls.o 00:26:03.231 SYMLINK libspdk_ioat.so 00:26:03.231 CC lib/util/string.o 00:26:03.231 LIB libspdk_vfio_user.a 00:26:03.231 CC lib/util/uuid.o 00:26:03.231 CC lib/util/xor.o 00:26:03.231 SO libspdk_vfio_user.so.5.0 00:26:03.231 CC lib/util/zipf.o 00:26:03.231 SYMLINK libspdk_vfio_user.so 00:26:03.231 CC lib/util/md5.o 00:26:03.231 LIB libspdk_util.a 00:26:03.231 SO libspdk_util.so.10.1 00:26:03.231 SYMLINK libspdk_util.so 00:26:03.231 LIB libspdk_trace_parser.a 00:26:03.231 SO libspdk_trace_parser.so.6.0 00:26:03.231 CC lib/json/json_util.o 00:26:03.231 CC 
lib/json/json_parse.o 00:26:03.231 CC lib/vmd/vmd.o 00:26:03.231 CC lib/json/json_write.o 00:26:03.231 CC lib/rdma_utils/rdma_utils.o 00:26:03.231 CC lib/rdma_provider/common.o 00:26:03.231 CC lib/env_dpdk/env.o 00:26:03.231 CC lib/conf/conf.o 00:26:03.231 CC lib/idxd/idxd.o 00:26:03.231 SYMLINK libspdk_trace_parser.so 00:26:03.231 CC lib/env_dpdk/memory.o 00:26:03.231 CC lib/rdma_provider/rdma_provider_verbs.o 00:26:03.231 LIB libspdk_conf.a 00:26:03.231 SO libspdk_conf.so.6.0 00:26:03.231 CC lib/env_dpdk/pci.o 00:26:03.231 CC lib/vmd/led.o 00:26:03.231 LIB libspdk_rdma_utils.a 00:26:03.231 SYMLINK libspdk_conf.so 00:26:03.231 CC lib/env_dpdk/init.o 00:26:03.231 SO libspdk_rdma_utils.so.1.0 00:26:03.231 LIB libspdk_json.a 00:26:03.488 LIB libspdk_rdma_provider.a 00:26:03.488 SO libspdk_json.so.6.0 00:26:03.488 SO libspdk_rdma_provider.so.6.0 00:26:03.488 SYMLINK libspdk_rdma_utils.so 00:26:03.488 CC lib/env_dpdk/threads.o 00:26:03.488 SYMLINK libspdk_rdma_provider.so 00:26:03.488 CC lib/env_dpdk/pci_ioat.o 00:26:03.488 CC lib/idxd/idxd_user.o 00:26:03.488 SYMLINK libspdk_json.so 00:26:03.488 CC lib/idxd/idxd_kernel.o 00:26:03.488 CC lib/env_dpdk/pci_virtio.o 00:26:03.488 CC lib/env_dpdk/pci_vmd.o 00:26:03.488 CC lib/env_dpdk/pci_idxd.o 00:26:03.488 CC lib/env_dpdk/pci_event.o 00:26:03.746 CC lib/env_dpdk/sigbus_handler.o 00:26:03.746 CC lib/env_dpdk/pci_dpdk.o 00:26:03.746 CC lib/env_dpdk/pci_dpdk_2207.o 00:26:03.746 CC lib/env_dpdk/pci_dpdk_2211.o 00:26:03.746 LIB libspdk_idxd.a 00:26:03.746 SO libspdk_idxd.so.12.1 00:26:03.746 LIB libspdk_vmd.a 00:26:03.746 SYMLINK libspdk_idxd.so 00:26:03.746 CC lib/jsonrpc/jsonrpc_server.o 00:26:03.746 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:26:03.746 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:26:03.746 CC lib/jsonrpc/jsonrpc_client.o 00:26:03.746 SO libspdk_vmd.so.6.0 00:26:03.746 SYMLINK libspdk_vmd.so 00:26:04.005 LIB libspdk_jsonrpc.a 00:26:04.005 SO libspdk_jsonrpc.so.6.0 00:26:04.263 SYMLINK libspdk_jsonrpc.so 00:26:04.263 CC lib/rpc/rpc.o 00:26:04.521 LIB libspdk_env_dpdk.a 00:26:04.521 SO libspdk_env_dpdk.so.15.1 00:26:04.521 LIB libspdk_rpc.a 00:26:04.521 SO libspdk_rpc.so.6.0 00:26:04.521 SYMLINK libspdk_env_dpdk.so 00:26:04.779 SYMLINK libspdk_rpc.so 00:26:04.779 CC lib/trace/trace.o 00:26:04.779 CC lib/trace/trace_flags.o 00:26:04.779 CC lib/trace/trace_rpc.o 00:26:04.779 CC lib/notify/notify.o 00:26:04.779 CC lib/notify/notify_rpc.o 00:26:04.779 CC lib/keyring/keyring.o 00:26:04.779 CC lib/keyring/keyring_rpc.o 00:26:05.038 LIB libspdk_notify.a 00:26:05.038 SO libspdk_notify.so.6.0 00:26:05.038 SYMLINK libspdk_notify.so 00:26:05.038 LIB libspdk_keyring.a 00:26:05.038 LIB libspdk_trace.a 00:26:05.038 SO libspdk_keyring.so.2.0 00:26:05.038 SO libspdk_trace.so.11.0 00:26:05.038 SYMLINK libspdk_keyring.so 00:26:05.038 SYMLINK libspdk_trace.so 00:26:05.295 CC lib/sock/sock.o 00:26:05.295 CC lib/sock/sock_rpc.o 00:26:05.295 CC lib/thread/iobuf.o 00:26:05.295 CC lib/thread/thread.o 00:26:05.900 LIB libspdk_sock.a 00:26:05.900 SO libspdk_sock.so.10.0 00:26:05.900 SYMLINK libspdk_sock.so 00:26:06.157 CC lib/nvme/nvme_ctrlr_cmd.o 00:26:06.157 CC lib/nvme/nvme_fabric.o 00:26:06.157 CC lib/nvme/nvme_ctrlr.o 00:26:06.157 CC lib/nvme/nvme_ns_cmd.o 00:26:06.157 CC lib/nvme/nvme_ns.o 00:26:06.157 CC lib/nvme/nvme_pcie_common.o 00:26:06.157 CC lib/nvme/nvme_qpair.o 00:26:06.157 CC lib/nvme/nvme.o 00:26:06.157 CC lib/nvme/nvme_pcie.o 00:26:06.722 CC lib/nvme/nvme_quirks.o 00:26:06.722 CC lib/nvme/nvme_transport.o 00:26:06.722 CC lib/nvme/nvme_discovery.o 
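An aside on the repeating pattern in this stretch of the log: each SPDK component is compiled (CC), archived into a static library (LIB libspdk_*.a), linked into a versioned shared object (SO libspdk_*.so.x.y), and finally given an unversioned alias (SYMLINK libspdk_*.so). A hedged way to see the result on disk, assuming the default output layout under the repo's build/ directory:

  ls -l build/lib/libspdk_log.so*
  # expected shape, matching the SO/SYMLINK lines above (version suffixes vary per library):
  #   libspdk_log.so -> libspdk_log.so.7.1
  #   libspdk_log.so.7.1

The unversioned symlink is what consumers link against, while the versioned name carries the ABI version checked elsewhere in this pipeline via SPDK_ABI_DIR.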
00:26:06.722 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:26:06.979 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:26:06.979 LIB libspdk_thread.a 00:26:06.979 SO libspdk_thread.so.11.0 00:26:06.979 CC lib/nvme/nvme_tcp.o 00:26:06.979 CC lib/nvme/nvme_opal.o 00:26:06.979 CC lib/nvme/nvme_io_msg.o 00:26:06.979 SYMLINK libspdk_thread.so 00:26:06.979 CC lib/nvme/nvme_poll_group.o 00:26:06.979 CC lib/nvme/nvme_zns.o 00:26:07.238 CC lib/accel/accel.o 00:26:07.238 CC lib/blob/blobstore.o 00:26:07.238 CC lib/init/json_config.o 00:26:07.496 CC lib/init/subsystem.o 00:26:07.496 CC lib/init/subsystem_rpc.o 00:26:07.496 CC lib/init/rpc.o 00:26:07.496 CC lib/accel/accel_rpc.o 00:26:07.496 CC lib/accel/accel_sw.o 00:26:07.753 LIB libspdk_init.a 00:26:07.753 CC lib/nvme/nvme_stubs.o 00:26:07.753 SO libspdk_init.so.6.0 00:26:07.753 CC lib/virtio/virtio.o 00:26:07.753 CC lib/fsdev/fsdev.o 00:26:07.753 SYMLINK libspdk_init.so 00:26:07.753 CC lib/virtio/virtio_vhost_user.o 00:26:07.753 CC lib/virtio/virtio_vfio_user.o 00:26:08.011 CC lib/fsdev/fsdev_io.o 00:26:08.011 CC lib/blob/request.o 00:26:08.011 CC lib/blob/zeroes.o 00:26:08.011 CC lib/blob/blob_bs_dev.o 00:26:08.011 CC lib/virtio/virtio_pci.o 00:26:08.011 CC lib/fsdev/fsdev_rpc.o 00:26:08.269 LIB libspdk_accel.a 00:26:08.269 SO libspdk_accel.so.16.0 00:26:08.269 CC lib/nvme/nvme_auth.o 00:26:08.269 SYMLINK libspdk_accel.so 00:26:08.269 CC lib/nvme/nvme_cuse.o 00:26:08.269 CC lib/nvme/nvme_rdma.o 00:26:08.269 LIB libspdk_virtio.a 00:26:08.269 CC lib/event/app.o 00:26:08.269 CC lib/event/reactor.o 00:26:08.269 CC lib/event/log_rpc.o 00:26:08.269 SO libspdk_virtio.so.7.0 00:26:08.269 LIB libspdk_fsdev.a 00:26:08.269 CC lib/bdev/bdev.o 00:26:08.527 SO libspdk_fsdev.so.2.0 00:26:08.527 SYMLINK libspdk_virtio.so 00:26:08.527 CC lib/bdev/bdev_rpc.o 00:26:08.527 CC lib/event/app_rpc.o 00:26:08.527 SYMLINK libspdk_fsdev.so 00:26:08.527 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:26:08.527 CC lib/event/scheduler_static.o 00:26:08.799 CC lib/bdev/bdev_zone.o 00:26:08.799 CC lib/bdev/part.o 00:26:08.799 CC lib/bdev/scsi_nvme.o 00:26:08.799 LIB libspdk_event.a 00:26:08.799 SO libspdk_event.so.14.0 00:26:08.799 SYMLINK libspdk_event.so 00:26:09.364 LIB libspdk_fuse_dispatcher.a 00:26:09.364 SO libspdk_fuse_dispatcher.so.1.0 00:26:09.364 SYMLINK libspdk_fuse_dispatcher.so 00:26:09.622 LIB libspdk_nvme.a 00:26:09.880 SO libspdk_nvme.so.15.0 00:26:09.880 SYMLINK libspdk_nvme.so 00:26:10.452 LIB libspdk_blob.a 00:26:10.710 SO libspdk_blob.so.11.0 00:26:10.710 SYMLINK libspdk_blob.so 00:26:10.971 CC lib/lvol/lvol.o 00:26:10.971 CC lib/blobfs/tree.o 00:26:10.971 CC lib/blobfs/blobfs.o 00:26:11.230 LIB libspdk_bdev.a 00:26:11.230 SO libspdk_bdev.so.17.0 00:26:11.488 SYMLINK libspdk_bdev.so 00:26:11.488 CC lib/nvmf/ctrlr.o 00:26:11.488 CC lib/nvmf/ctrlr_discovery.o 00:26:11.488 CC lib/nvmf/subsystem.o 00:26:11.488 CC lib/nvmf/ctrlr_bdev.o 00:26:11.488 CC lib/ublk/ublk.o 00:26:11.488 CC lib/ftl/ftl_core.o 00:26:11.488 CC lib/scsi/dev.o 00:26:11.488 CC lib/nbd/nbd.o 00:26:11.746 LIB libspdk_blobfs.a 00:26:11.746 SO libspdk_blobfs.so.10.0 00:26:11.746 SYMLINK libspdk_blobfs.so 00:26:11.746 CC lib/nbd/nbd_rpc.o 00:26:11.746 CC lib/scsi/lun.o 00:26:12.003 CC lib/ftl/ftl_init.o 00:26:12.003 CC lib/nvmf/nvmf.o 00:26:12.003 CC lib/scsi/port.o 00:26:12.003 LIB libspdk_lvol.a 00:26:12.003 LIB libspdk_nbd.a 00:26:12.003 SO libspdk_lvol.so.10.0 00:26:12.003 SO libspdk_nbd.so.7.0 00:26:12.003 CC lib/nvmf/nvmf_rpc.o 00:26:12.003 SYMLINK libspdk_lvol.so 00:26:12.003 CC lib/ublk/ublk_rpc.o 00:26:12.003 
SYMLINK libspdk_nbd.so 00:26:12.003 CC lib/nvmf/transport.o 00:26:12.003 CC lib/scsi/scsi.o 00:26:12.003 CC lib/ftl/ftl_layout.o 00:26:12.260 CC lib/ftl/ftl_debug.o 00:26:12.260 LIB libspdk_ublk.a 00:26:12.260 CC lib/scsi/scsi_bdev.o 00:26:12.260 SO libspdk_ublk.so.3.0 00:26:12.260 CC lib/scsi/scsi_pr.o 00:26:12.260 SYMLINK libspdk_ublk.so 00:26:12.260 CC lib/scsi/scsi_rpc.o 00:26:12.260 CC lib/ftl/ftl_io.o 00:26:12.518 CC lib/ftl/ftl_sb.o 00:26:12.518 CC lib/ftl/ftl_l2p.o 00:26:12.518 CC lib/ftl/ftl_l2p_flat.o 00:26:12.518 CC lib/ftl/ftl_nv_cache.o 00:26:12.518 CC lib/ftl/ftl_band.o 00:26:12.518 CC lib/ftl/ftl_band_ops.o 00:26:12.518 CC lib/ftl/ftl_writer.o 00:26:12.775 CC lib/ftl/ftl_rq.o 00:26:12.775 CC lib/scsi/task.o 00:26:12.775 CC lib/ftl/ftl_reloc.o 00:26:12.775 CC lib/ftl/ftl_l2p_cache.o 00:26:12.775 CC lib/ftl/ftl_p2l.o 00:26:12.775 CC lib/nvmf/tcp.o 00:26:12.775 CC lib/nvmf/stubs.o 00:26:12.775 LIB libspdk_scsi.a 00:26:12.775 CC lib/ftl/ftl_p2l_log.o 00:26:13.033 SO libspdk_scsi.so.9.0 00:26:13.033 CC lib/ftl/mngt/ftl_mngt.o 00:26:13.033 SYMLINK libspdk_scsi.so 00:26:13.033 CC lib/nvmf/mdns_server.o 00:26:13.033 CC lib/nvmf/rdma.o 00:26:13.033 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:26:13.291 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:26:13.291 CC lib/ftl/mngt/ftl_mngt_startup.o 00:26:13.291 CC lib/ftl/mngt/ftl_mngt_md.o 00:26:13.291 CC lib/ftl/mngt/ftl_mngt_misc.o 00:26:13.291 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:26:13.291 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:26:13.291 CC lib/nvmf/auth.o 00:26:13.291 CC lib/ftl/mngt/ftl_mngt_band.o 00:26:13.548 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:26:13.548 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:26:13.548 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:26:13.548 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:26:13.548 CC lib/iscsi/conn.o 00:26:13.548 CC lib/vhost/vhost.o 00:26:13.548 CC lib/vhost/vhost_rpc.o 00:26:13.808 CC lib/ftl/utils/ftl_conf.o 00:26:13.808 CC lib/iscsi/init_grp.o 00:26:13.808 CC lib/ftl/utils/ftl_md.o 00:26:13.808 CC lib/vhost/vhost_scsi.o 00:26:14.066 CC lib/vhost/vhost_blk.o 00:26:14.066 CC lib/iscsi/iscsi.o 00:26:14.066 CC lib/iscsi/param.o 00:26:14.066 CC lib/iscsi/portal_grp.o 00:26:14.066 CC lib/ftl/utils/ftl_mempool.o 00:26:14.323 CC lib/vhost/rte_vhost_user.o 00:26:14.323 CC lib/iscsi/tgt_node.o 00:26:14.323 CC lib/iscsi/iscsi_subsystem.o 00:26:14.323 CC lib/iscsi/iscsi_rpc.o 00:26:14.324 CC lib/ftl/utils/ftl_bitmap.o 00:26:14.582 CC lib/iscsi/task.o 00:26:14.582 CC lib/ftl/utils/ftl_property.o 00:26:14.582 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:26:14.582 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:26:14.582 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:26:14.582 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:26:14.840 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:26:14.840 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:26:14.840 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:26:14.840 CC lib/ftl/upgrade/ftl_sb_v3.o 00:26:14.840 CC lib/ftl/upgrade/ftl_sb_v5.o 00:26:14.840 CC lib/ftl/nvc/ftl_nvc_dev.o 00:26:14.840 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:26:14.840 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:26:14.840 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:26:14.840 CC lib/ftl/base/ftl_base_dev.o 00:26:14.840 CC lib/ftl/base/ftl_base_bdev.o 00:26:15.100 CC lib/ftl/ftl_trace.o 00:26:15.100 LIB libspdk_iscsi.a 00:26:15.100 LIB libspdk_ftl.a 00:26:15.100 LIB libspdk_vhost.a 00:26:15.100 LIB libspdk_nvmf.a 00:26:15.100 SO libspdk_iscsi.so.8.0 00:26:15.100 SO libspdk_vhost.so.8.0 00:26:15.358 SYMLINK libspdk_vhost.so 00:26:15.358 SO libspdk_nvmf.so.20.0 00:26:15.358 SO 
libspdk_ftl.so.9.0 00:26:15.358 SYMLINK libspdk_iscsi.so 00:26:15.358 SYMLINK libspdk_nvmf.so 00:26:15.615 SYMLINK libspdk_ftl.so 00:26:15.911 CC module/env_dpdk/env_dpdk_rpc.o 00:26:15.911 CC module/sock/posix/posix.o 00:26:15.911 CC module/accel/error/accel_error.o 00:26:15.911 CC module/keyring/file/keyring.o 00:26:15.911 CC module/keyring/linux/keyring.o 00:26:15.911 CC module/accel/ioat/accel_ioat.o 00:26:15.911 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:26:15.911 CC module/fsdev/aio/fsdev_aio.o 00:26:15.911 CC module/scheduler/dynamic/scheduler_dynamic.o 00:26:15.911 CC module/blob/bdev/blob_bdev.o 00:26:15.911 LIB libspdk_env_dpdk_rpc.a 00:26:15.911 SO libspdk_env_dpdk_rpc.so.6.0 00:26:16.170 CC module/keyring/linux/keyring_rpc.o 00:26:16.170 LIB libspdk_scheduler_dpdk_governor.a 00:26:16.170 CC module/accel/error/accel_error_rpc.o 00:26:16.170 SO libspdk_scheduler_dpdk_governor.so.4.0 00:26:16.170 SYMLINK libspdk_env_dpdk_rpc.so 00:26:16.170 CC module/keyring/file/keyring_rpc.o 00:26:16.170 CC module/accel/ioat/accel_ioat_rpc.o 00:26:16.170 LIB libspdk_scheduler_dynamic.a 00:26:16.170 SYMLINK libspdk_scheduler_dpdk_governor.so 00:26:16.170 SO libspdk_scheduler_dynamic.so.4.0 00:26:16.170 LIB libspdk_keyring_linux.a 00:26:16.170 SYMLINK libspdk_scheduler_dynamic.so 00:26:16.170 CC module/fsdev/aio/fsdev_aio_rpc.o 00:26:16.170 SO libspdk_keyring_linux.so.1.0 00:26:16.170 LIB libspdk_blob_bdev.a 00:26:16.170 LIB libspdk_keyring_file.a 00:26:16.170 LIB libspdk_accel_ioat.a 00:26:16.170 LIB libspdk_accel_error.a 00:26:16.170 SO libspdk_blob_bdev.so.11.0 00:26:16.170 SO libspdk_keyring_file.so.2.0 00:26:16.170 SO libspdk_accel_ioat.so.6.0 00:26:16.170 SYMLINK libspdk_keyring_linux.so 00:26:16.170 SO libspdk_accel_error.so.2.0 00:26:16.170 SYMLINK libspdk_blob_bdev.so 00:26:16.170 SYMLINK libspdk_accel_ioat.so 00:26:16.170 CC module/scheduler/gscheduler/gscheduler.o 00:26:16.170 SYMLINK libspdk_keyring_file.so 00:26:16.170 CC module/accel/dsa/accel_dsa.o 00:26:16.170 CC module/accel/dsa/accel_dsa_rpc.o 00:26:16.170 SYMLINK libspdk_accel_error.so 00:26:16.428 CC module/fsdev/aio/linux_aio_mgr.o 00:26:16.428 LIB libspdk_scheduler_gscheduler.a 00:26:16.428 CC module/accel/iaa/accel_iaa.o 00:26:16.428 SO libspdk_scheduler_gscheduler.so.4.0 00:26:16.428 CC module/accel/iaa/accel_iaa_rpc.o 00:26:16.428 CC module/bdev/error/vbdev_error.o 00:26:16.428 SYMLINK libspdk_scheduler_gscheduler.so 00:26:16.428 CC module/bdev/error/vbdev_error_rpc.o 00:26:16.428 CC module/blobfs/bdev/blobfs_bdev.o 00:26:16.428 CC module/bdev/delay/vbdev_delay.o 00:26:16.428 CC module/bdev/gpt/gpt.o 00:26:16.686 LIB libspdk_accel_dsa.a 00:26:16.686 CC module/bdev/delay/vbdev_delay_rpc.o 00:26:16.686 SO libspdk_accel_dsa.so.5.0 00:26:16.686 LIB libspdk_accel_iaa.a 00:26:16.686 SO libspdk_accel_iaa.so.3.0 00:26:16.686 SYMLINK libspdk_accel_dsa.so 00:26:16.686 LIB libspdk_sock_posix.a 00:26:16.686 SYMLINK libspdk_accel_iaa.so 00:26:16.686 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:26:16.686 SO libspdk_sock_posix.so.6.0 00:26:16.686 LIB libspdk_fsdev_aio.a 00:26:16.686 CC module/bdev/gpt/vbdev_gpt.o 00:26:16.687 SO libspdk_fsdev_aio.so.1.0 00:26:16.687 SYMLINK libspdk_sock_posix.so 00:26:16.687 LIB libspdk_bdev_error.a 00:26:16.687 CC module/bdev/lvol/vbdev_lvol.o 00:26:16.687 SYMLINK libspdk_fsdev_aio.so 00:26:16.687 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:26:16.687 SO libspdk_bdev_error.so.6.0 00:26:16.687 LIB libspdk_bdev_delay.a 00:26:16.687 CC module/bdev/malloc/bdev_malloc.o 00:26:16.687 LIB 
libspdk_blobfs_bdev.a 00:26:16.687 CC module/bdev/null/bdev_null.o 00:26:16.687 SO libspdk_blobfs_bdev.so.6.0 00:26:16.945 SO libspdk_bdev_delay.so.6.0 00:26:16.945 SYMLINK libspdk_bdev_error.so 00:26:16.945 CC module/bdev/malloc/bdev_malloc_rpc.o 00:26:16.945 SYMLINK libspdk_blobfs_bdev.so 00:26:16.945 CC module/bdev/null/bdev_null_rpc.o 00:26:16.945 SYMLINK libspdk_bdev_delay.so 00:26:16.945 CC module/bdev/nvme/bdev_nvme.o 00:26:16.945 CC module/bdev/passthru/vbdev_passthru.o 00:26:16.945 LIB libspdk_bdev_gpt.a 00:26:16.945 SO libspdk_bdev_gpt.so.6.0 00:26:16.945 CC module/bdev/raid/bdev_raid.o 00:26:16.945 LIB libspdk_bdev_null.a 00:26:16.945 SO libspdk_bdev_null.so.6.0 00:26:16.945 SYMLINK libspdk_bdev_gpt.so 00:26:17.203 CC module/bdev/split/vbdev_split.o 00:26:17.203 SYMLINK libspdk_bdev_null.so 00:26:17.203 CC module/bdev/raid/bdev_raid_rpc.o 00:26:17.203 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:26:17.203 CC module/bdev/zone_block/vbdev_zone_block.o 00:26:17.203 LIB libspdk_bdev_malloc.a 00:26:17.203 SO libspdk_bdev_malloc.so.6.0 00:26:17.203 CC module/bdev/xnvme/bdev_xnvme.o 00:26:17.203 SYMLINK libspdk_bdev_malloc.so 00:26:17.203 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:26:17.203 CC module/bdev/aio/bdev_aio.o 00:26:17.203 LIB libspdk_bdev_lvol.a 00:26:17.203 LIB libspdk_bdev_passthru.a 00:26:17.203 SO libspdk_bdev_lvol.so.6.0 00:26:17.203 CC module/bdev/split/vbdev_split_rpc.o 00:26:17.203 SO libspdk_bdev_passthru.so.6.0 00:26:17.460 CC module/bdev/raid/bdev_raid_sb.o 00:26:17.460 CC module/bdev/raid/raid0.o 00:26:17.460 SYMLINK libspdk_bdev_lvol.so 00:26:17.460 CC module/bdev/raid/raid1.o 00:26:17.460 SYMLINK libspdk_bdev_passthru.so 00:26:17.460 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:26:17.460 LIB libspdk_bdev_zone_block.a 00:26:17.460 SO libspdk_bdev_zone_block.so.6.0 00:26:17.460 LIB libspdk_bdev_split.a 00:26:17.460 SO libspdk_bdev_split.so.6.0 00:26:17.460 LIB libspdk_bdev_xnvme.a 00:26:17.460 SO libspdk_bdev_xnvme.so.3.0 00:26:17.460 SYMLINK libspdk_bdev_zone_block.so 00:26:17.460 SYMLINK libspdk_bdev_split.so 00:26:17.460 CC module/bdev/aio/bdev_aio_rpc.o 00:26:17.460 SYMLINK libspdk_bdev_xnvme.so 00:26:17.460 CC module/bdev/nvme/bdev_nvme_rpc.o 00:26:17.460 CC module/bdev/nvme/nvme_rpc.o 00:26:17.717 CC module/bdev/raid/concat.o 00:26:17.717 CC module/bdev/ftl/bdev_ftl.o 00:26:17.717 CC module/bdev/ftl/bdev_ftl_rpc.o 00:26:17.717 LIB libspdk_bdev_aio.a 00:26:17.717 CC module/bdev/iscsi/bdev_iscsi.o 00:26:17.717 SO libspdk_bdev_aio.so.6.0 00:26:17.717 CC module/bdev/virtio/bdev_virtio_scsi.o 00:26:17.717 SYMLINK libspdk_bdev_aio.so 00:26:17.717 CC module/bdev/virtio/bdev_virtio_blk.o 00:26:17.717 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:26:17.717 CC module/bdev/nvme/bdev_mdns_client.o 00:26:17.975 CC module/bdev/nvme/vbdev_opal.o 00:26:17.975 LIB libspdk_bdev_ftl.a 00:26:17.975 CC module/bdev/nvme/vbdev_opal_rpc.o 00:26:17.975 SO libspdk_bdev_ftl.so.6.0 00:26:17.975 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:26:17.975 LIB libspdk_bdev_raid.a 00:26:17.975 CC module/bdev/virtio/bdev_virtio_rpc.o 00:26:17.975 SYMLINK libspdk_bdev_ftl.so 00:26:17.975 SO libspdk_bdev_raid.so.6.0 00:26:17.975 LIB libspdk_bdev_iscsi.a 00:26:17.975 SO libspdk_bdev_iscsi.so.6.0 00:26:17.975 SYMLINK libspdk_bdev_raid.so 00:26:18.232 SYMLINK libspdk_bdev_iscsi.so 00:26:18.232 LIB libspdk_bdev_virtio.a 00:26:18.232 SO libspdk_bdev_virtio.so.6.0 00:26:18.232 SYMLINK libspdk_bdev_virtio.so 00:26:19.603 LIB libspdk_bdev_nvme.a 00:26:19.603 SO libspdk_bdev_nvme.so.7.1 00:26:19.603 
SYMLINK libspdk_bdev_nvme.so 00:26:20.174 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:26:20.174 CC module/event/subsystems/vmd/vmd.o 00:26:20.174 CC module/event/subsystems/vmd/vmd_rpc.o 00:26:20.174 CC module/event/subsystems/iobuf/iobuf.o 00:26:20.174 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:26:20.174 CC module/event/subsystems/keyring/keyring.o 00:26:20.174 CC module/event/subsystems/sock/sock.o 00:26:20.174 CC module/event/subsystems/scheduler/scheduler.o 00:26:20.174 CC module/event/subsystems/fsdev/fsdev.o 00:26:20.174 LIB libspdk_event_vhost_blk.a 00:26:20.174 LIB libspdk_event_scheduler.a 00:26:20.174 LIB libspdk_event_vmd.a 00:26:20.174 LIB libspdk_event_keyring.a 00:26:20.174 LIB libspdk_event_fsdev.a 00:26:20.174 SO libspdk_event_vhost_blk.so.3.0 00:26:20.174 SO libspdk_event_scheduler.so.4.0 00:26:20.174 LIB libspdk_event_sock.a 00:26:20.174 SO libspdk_event_vmd.so.6.0 00:26:20.174 SO libspdk_event_keyring.so.1.0 00:26:20.174 LIB libspdk_event_iobuf.a 00:26:20.174 SO libspdk_event_fsdev.so.1.0 00:26:20.174 SO libspdk_event_sock.so.5.0 00:26:20.174 SYMLINK libspdk_event_vhost_blk.so 00:26:20.174 SO libspdk_event_iobuf.so.3.0 00:26:20.174 SYMLINK libspdk_event_scheduler.so 00:26:20.174 SYMLINK libspdk_event_keyring.so 00:26:20.174 SYMLINK libspdk_event_vmd.so 00:26:20.174 SYMLINK libspdk_event_sock.so 00:26:20.174 SYMLINK libspdk_event_fsdev.so 00:26:20.174 SYMLINK libspdk_event_iobuf.so 00:26:20.435 CC module/event/subsystems/accel/accel.o 00:26:20.435 LIB libspdk_event_accel.a 00:26:20.707 SO libspdk_event_accel.so.6.0 00:26:20.707 SYMLINK libspdk_event_accel.so 00:26:20.964 CC module/event/subsystems/bdev/bdev.o 00:26:20.964 LIB libspdk_event_bdev.a 00:26:20.964 SO libspdk_event_bdev.so.6.0 00:26:20.964 SYMLINK libspdk_event_bdev.so 00:26:21.222 CC module/event/subsystems/scsi/scsi.o 00:26:21.222 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:26:21.222 CC module/event/subsystems/ublk/ublk.o 00:26:21.222 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:26:21.222 CC module/event/subsystems/nbd/nbd.o 00:26:21.222 LIB libspdk_event_nbd.a 00:26:21.222 LIB libspdk_event_ublk.a 00:26:21.222 LIB libspdk_event_scsi.a 00:26:21.222 SO libspdk_event_ublk.so.3.0 00:26:21.222 SO libspdk_event_nbd.so.6.0 00:26:21.480 SO libspdk_event_scsi.so.6.0 00:26:21.480 SYMLINK libspdk_event_ublk.so 00:26:21.480 SYMLINK libspdk_event_nbd.so 00:26:21.480 SYMLINK libspdk_event_scsi.so 00:26:21.480 LIB libspdk_event_nvmf.a 00:26:21.480 SO libspdk_event_nvmf.so.6.0 00:26:21.480 SYMLINK libspdk_event_nvmf.so 00:26:21.480 CC module/event/subsystems/iscsi/iscsi.o 00:26:21.480 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:26:21.738 LIB libspdk_event_iscsi.a 00:26:21.738 LIB libspdk_event_vhost_scsi.a 00:26:21.738 SO libspdk_event_iscsi.so.6.0 00:26:21.738 SO libspdk_event_vhost_scsi.so.3.0 00:26:21.738 SYMLINK libspdk_event_iscsi.so 00:26:21.738 SYMLINK libspdk_event_vhost_scsi.so 00:26:21.996 SO libspdk.so.6.0 00:26:21.996 SYMLINK libspdk.so 00:26:21.996 CC app/spdk_lspci/spdk_lspci.o 00:26:21.996 CC app/trace_record/trace_record.o 00:26:21.996 CC app/spdk_nvme_identify/identify.o 00:26:21.996 CXX app/trace/trace.o 00:26:21.996 CC app/spdk_nvme_perf/perf.o 00:26:21.996 CC app/iscsi_tgt/iscsi_tgt.o 00:26:21.996 CC app/nvmf_tgt/nvmf_main.o 00:26:21.996 CC app/spdk_tgt/spdk_tgt.o 00:26:22.253 CC test/thread/poller_perf/poller_perf.o 00:26:22.253 CC examples/util/zipf/zipf.o 00:26:22.253 LINK spdk_lspci 00:26:22.253 LINK poller_perf 00:26:22.253 LINK nvmf_tgt 00:26:22.253 LINK spdk_tgt 
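The LINK lines above produce the SPDK application binaries themselves (spdk_lspci, nvmf_tgt, spdk_tgt, and the rest). Assuming the default Makefile layout, these land under build/bin/ in the repo, so a manual smoke check after a build like this one could look as follows. This is a hypothetical illustration, not a step this CI stage runs, and actually starting SPDK apps on a fresh machine typically also needs hugepages configured first (SPDK ships scripts/setup.sh for that):

  ./build/bin/spdk_lspci    # enumerate PCI devices (e.g. NVMe) visible to SPDK
  ./build/bin/spdk_tgt -h   # print usage for the general-purpose SPDK target just linked

The pipeline instead moves straight on to compiling the functional-test fixtures and example tools that autotest uses, as the CC/LINK lines below show.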
00:26:22.253 LINK zipf 00:26:22.253 LINK iscsi_tgt 00:26:22.253 LINK spdk_trace_record 00:26:22.511 LINK spdk_trace 00:26:22.511 CC app/spdk_nvme_discover/discovery_aer.o 00:26:22.511 CC examples/ioat/perf/perf.o 00:26:22.511 CC examples/interrupt_tgt/interrupt_tgt.o 00:26:22.511 CC test/dma/test_dma/test_dma.o 00:26:22.511 LINK spdk_nvme_discover 00:26:22.511 CC examples/sock/hello_world/hello_sock.o 00:26:22.511 CC examples/vmd/lsvmd/lsvmd.o 00:26:22.511 CC examples/thread/thread/thread_ex.o 00:26:22.768 LINK ioat_perf 00:26:22.768 LINK interrupt_tgt 00:26:22.768 CC examples/idxd/perf/perf.o 00:26:22.768 LINK lsvmd 00:26:22.768 CC examples/ioat/verify/verify.o 00:26:22.768 LINK spdk_nvme_perf 00:26:22.768 LINK hello_sock 00:26:22.768 CC examples/vmd/led/led.o 00:26:22.768 LINK spdk_nvme_identify 00:26:22.768 LINK thread 00:26:23.026 CC test/app/bdev_svc/bdev_svc.o 00:26:23.026 LINK led 00:26:23.026 CC app/spdk_top/spdk_top.o 00:26:23.026 LINK verify 00:26:23.026 LINK idxd_perf 00:26:23.026 CC test/app/histogram_perf/histogram_perf.o 00:26:23.026 LINK test_dma 00:26:23.026 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:26:23.026 LINK bdev_svc 00:26:23.026 CC test/app/jsoncat/jsoncat.o 00:26:23.026 CC test/app/stub/stub.o 00:26:23.026 LINK histogram_perf 00:26:23.283 LINK jsoncat 00:26:23.283 CC examples/nvme/hello_world/hello_world.o 00:26:23.283 CC examples/accel/perf/accel_perf.o 00:26:23.283 CC examples/fsdev/hello_world/hello_fsdev.o 00:26:23.283 LINK stub 00:26:23.283 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:26:23.283 CC examples/nvme/reconnect/reconnect.o 00:26:23.283 CC examples/blob/hello_world/hello_blob.o 00:26:23.283 CC examples/blob/cli/blobcli.o 00:26:23.540 LINK hello_world 00:26:23.540 LINK nvme_fuzz 00:26:23.540 LINK hello_fsdev 00:26:23.540 CC app/vhost/vhost.o 00:26:23.540 LINK hello_blob 00:26:23.540 TEST_HEADER include/spdk/accel.h 00:26:23.540 TEST_HEADER include/spdk/accel_module.h 00:26:23.540 TEST_HEADER include/spdk/assert.h 00:26:23.540 TEST_HEADER include/spdk/barrier.h 00:26:23.540 TEST_HEADER include/spdk/base64.h 00:26:23.540 TEST_HEADER include/spdk/bdev.h 00:26:23.540 TEST_HEADER include/spdk/bdev_module.h 00:26:23.540 TEST_HEADER include/spdk/bdev_zone.h 00:26:23.540 TEST_HEADER include/spdk/bit_array.h 00:26:23.540 LINK reconnect 00:26:23.540 TEST_HEADER include/spdk/bit_pool.h 00:26:23.540 TEST_HEADER include/spdk/blob_bdev.h 00:26:23.540 TEST_HEADER include/spdk/blobfs_bdev.h 00:26:23.540 TEST_HEADER include/spdk/blobfs.h 00:26:23.541 TEST_HEADER include/spdk/blob.h 00:26:23.541 TEST_HEADER include/spdk/conf.h 00:26:23.541 TEST_HEADER include/spdk/config.h 00:26:23.541 TEST_HEADER include/spdk/cpuset.h 00:26:23.541 TEST_HEADER include/spdk/crc16.h 00:26:23.541 TEST_HEADER include/spdk/crc32.h 00:26:23.541 TEST_HEADER include/spdk/crc64.h 00:26:23.541 TEST_HEADER include/spdk/dif.h 00:26:23.541 TEST_HEADER include/spdk/dma.h 00:26:23.541 TEST_HEADER include/spdk/endian.h 00:26:23.541 TEST_HEADER include/spdk/env_dpdk.h 00:26:23.541 TEST_HEADER include/spdk/env.h 00:26:23.541 TEST_HEADER include/spdk/event.h 00:26:23.541 TEST_HEADER include/spdk/fd_group.h 00:26:23.541 TEST_HEADER include/spdk/fd.h 00:26:23.541 TEST_HEADER include/spdk/file.h 00:26:23.541 TEST_HEADER include/spdk/fsdev.h 00:26:23.541 TEST_HEADER include/spdk/fsdev_module.h 00:26:23.541 TEST_HEADER include/spdk/ftl.h 00:26:23.541 TEST_HEADER include/spdk/fuse_dispatcher.h 00:26:23.541 TEST_HEADER include/spdk/gpt_spec.h 00:26:23.541 TEST_HEADER include/spdk/hexlify.h 00:26:23.541 TEST_HEADER 
include/spdk/histogram_data.h 00:26:23.798 TEST_HEADER include/spdk/idxd.h 00:26:23.798 TEST_HEADER include/spdk/idxd_spec.h 00:26:23.798 LINK spdk_top 00:26:23.798 TEST_HEADER include/spdk/init.h 00:26:23.798 TEST_HEADER include/spdk/ioat.h 00:26:23.798 TEST_HEADER include/spdk/ioat_spec.h 00:26:23.798 TEST_HEADER include/spdk/iscsi_spec.h 00:26:23.798 LINK vhost 00:26:23.798 TEST_HEADER include/spdk/json.h 00:26:23.798 TEST_HEADER include/spdk/jsonrpc.h 00:26:23.798 TEST_HEADER include/spdk/keyring.h 00:26:23.798 TEST_HEADER include/spdk/keyring_module.h 00:26:23.798 TEST_HEADER include/spdk/likely.h 00:26:23.798 TEST_HEADER include/spdk/log.h 00:26:23.798 TEST_HEADER include/spdk/lvol.h 00:26:23.798 TEST_HEADER include/spdk/md5.h 00:26:23.798 TEST_HEADER include/spdk/memory.h 00:26:23.798 TEST_HEADER include/spdk/mmio.h 00:26:23.798 TEST_HEADER include/spdk/nbd.h 00:26:23.798 TEST_HEADER include/spdk/net.h 00:26:23.798 TEST_HEADER include/spdk/notify.h 00:26:23.798 TEST_HEADER include/spdk/nvme.h 00:26:23.798 TEST_HEADER include/spdk/nvme_intel.h 00:26:23.798 TEST_HEADER include/spdk/nvme_ocssd.h 00:26:23.798 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:26:23.798 TEST_HEADER include/spdk/nvme_spec.h 00:26:23.798 TEST_HEADER include/spdk/nvme_zns.h 00:26:23.798 TEST_HEADER include/spdk/nvmf_cmd.h 00:26:23.798 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:26:23.798 TEST_HEADER include/spdk/nvmf.h 00:26:23.798 TEST_HEADER include/spdk/nvmf_spec.h 00:26:23.798 TEST_HEADER include/spdk/nvmf_transport.h 00:26:23.798 TEST_HEADER include/spdk/opal.h 00:26:23.798 TEST_HEADER include/spdk/opal_spec.h 00:26:23.798 TEST_HEADER include/spdk/pci_ids.h 00:26:23.798 TEST_HEADER include/spdk/pipe.h 00:26:23.798 TEST_HEADER include/spdk/queue.h 00:26:23.798 TEST_HEADER include/spdk/reduce.h 00:26:23.798 TEST_HEADER include/spdk/rpc.h 00:26:23.798 TEST_HEADER include/spdk/scheduler.h 00:26:23.798 TEST_HEADER include/spdk/scsi.h 00:26:23.798 TEST_HEADER include/spdk/scsi_spec.h 00:26:23.798 TEST_HEADER include/spdk/sock.h 00:26:23.798 TEST_HEADER include/spdk/stdinc.h 00:26:23.798 TEST_HEADER include/spdk/string.h 00:26:23.798 TEST_HEADER include/spdk/thread.h 00:26:23.798 TEST_HEADER include/spdk/trace.h 00:26:23.798 LINK accel_perf 00:26:23.798 TEST_HEADER include/spdk/trace_parser.h 00:26:23.798 TEST_HEADER include/spdk/tree.h 00:26:23.798 TEST_HEADER include/spdk/ublk.h 00:26:23.798 TEST_HEADER include/spdk/util.h 00:26:23.798 TEST_HEADER include/spdk/uuid.h 00:26:23.798 TEST_HEADER include/spdk/version.h 00:26:23.798 TEST_HEADER include/spdk/vfio_user_pci.h 00:26:23.798 TEST_HEADER include/spdk/vfio_user_spec.h 00:26:23.798 TEST_HEADER include/spdk/vhost.h 00:26:23.798 CC test/event/event_perf/event_perf.o 00:26:23.798 TEST_HEADER include/spdk/vmd.h 00:26:23.798 TEST_HEADER include/spdk/xor.h 00:26:23.798 CC test/env/mem_callbacks/mem_callbacks.o 00:26:23.798 TEST_HEADER include/spdk/zipf.h 00:26:23.798 CXX test/cpp_headers/accel.o 00:26:23.798 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:26:23.798 LINK blobcli 00:26:23.798 CC examples/nvme/nvme_manage/nvme_manage.o 00:26:23.798 LINK event_perf 00:26:23.798 CC app/spdk_dd/spdk_dd.o 00:26:24.055 CXX test/cpp_headers/accel_module.o 00:26:24.055 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:26:24.055 CC examples/nvme/arbitration/arbitration.o 00:26:24.055 CC test/nvme/aer/aer.o 00:26:24.055 CC test/event/reactor/reactor.o 00:26:24.055 CXX test/cpp_headers/assert.o 00:26:24.055 CC examples/nvme/hotplug/hotplug.o 00:26:24.055 LINK mem_callbacks 00:26:24.055 
LINK reactor 00:26:24.313 LINK spdk_dd 00:26:24.313 CXX test/cpp_headers/barrier.o 00:26:24.313 LINK aer 00:26:24.313 LINK arbitration 00:26:24.313 LINK hotplug 00:26:24.313 CC test/env/vtophys/vtophys.o 00:26:24.313 LINK vhost_fuzz 00:26:24.313 CC test/event/reactor_perf/reactor_perf.o 00:26:24.313 LINK nvme_manage 00:26:24.313 CXX test/cpp_headers/base64.o 00:26:24.570 LINK vtophys 00:26:24.571 CC test/event/app_repeat/app_repeat.o 00:26:24.571 CC test/nvme/reset/reset.o 00:26:24.571 CC app/fio/nvme/fio_plugin.o 00:26:24.571 LINK reactor_perf 00:26:24.571 CXX test/cpp_headers/bdev.o 00:26:24.571 CC examples/nvme/cmb_copy/cmb_copy.o 00:26:24.571 CC test/event/scheduler/scheduler.o 00:26:24.571 CC test/nvme/sgl/sgl.o 00:26:24.571 LINK app_repeat 00:26:24.571 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:26:24.571 LINK cmb_copy 00:26:24.571 CXX test/cpp_headers/bdev_module.o 00:26:24.828 LINK scheduler 00:26:24.828 LINK reset 00:26:24.828 CC examples/bdev/hello_world/hello_bdev.o 00:26:24.828 CXX test/cpp_headers/bdev_zone.o 00:26:24.828 LINK env_dpdk_post_init 00:26:24.828 LINK sgl 00:26:24.828 CC examples/nvme/abort/abort.o 00:26:24.828 CXX test/cpp_headers/bit_array.o 00:26:24.828 CC test/rpc_client/rpc_client_test.o 00:26:24.828 LINK spdk_nvme 00:26:24.828 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:26:25.086 LINK iscsi_fuzz 00:26:25.086 LINK hello_bdev 00:26:25.086 CC test/nvme/e2edp/nvme_dp.o 00:26:25.086 CXX test/cpp_headers/bit_pool.o 00:26:25.086 CC test/env/memory/memory_ut.o 00:26:25.086 CC test/accel/dif/dif.o 00:26:25.086 LINK rpc_client_test 00:26:25.086 CC app/fio/bdev/fio_plugin.o 00:26:25.086 LINK pmr_persistence 00:26:25.086 CXX test/cpp_headers/blob_bdev.o 00:26:25.086 LINK abort 00:26:25.343 LINK nvme_dp 00:26:25.343 CC test/nvme/overhead/overhead.o 00:26:25.343 CXX test/cpp_headers/blobfs_bdev.o 00:26:25.343 CC examples/bdev/bdevperf/bdevperf.o 00:26:25.343 CC test/env/pci/pci_ut.o 00:26:25.343 CXX test/cpp_headers/blobfs.o 00:26:25.343 CC test/blobfs/mkfs/mkfs.o 00:26:25.343 CXX test/cpp_headers/blob.o 00:26:25.600 LINK spdk_bdev 00:26:25.600 CXX test/cpp_headers/conf.o 00:26:25.600 LINK overhead 00:26:25.600 CC test/lvol/esnap/esnap.o 00:26:25.600 LINK mkfs 00:26:25.600 CC test/nvme/err_injection/err_injection.o 00:26:25.600 CXX test/cpp_headers/config.o 00:26:25.600 CC test/nvme/startup/startup.o 00:26:25.600 CXX test/cpp_headers/cpuset.o 00:26:25.600 CC test/nvme/reserve/reserve.o 00:26:25.600 CXX test/cpp_headers/crc16.o 00:26:25.600 LINK pci_ut 00:26:25.859 LINK err_injection 00:26:25.859 LINK dif 00:26:25.859 LINK startup 00:26:25.859 CC test/nvme/simple_copy/simple_copy.o 00:26:25.859 CXX test/cpp_headers/crc32.o 00:26:25.859 LINK reserve 00:26:25.859 CXX test/cpp_headers/crc64.o 00:26:25.859 CXX test/cpp_headers/dif.o 00:26:25.859 CXX test/cpp_headers/dma.o 00:26:25.859 CC test/nvme/connect_stress/connect_stress.o 00:26:26.117 CC test/nvme/boot_partition/boot_partition.o 00:26:26.117 CXX test/cpp_headers/endian.o 00:26:26.117 CXX test/cpp_headers/env_dpdk.o 00:26:26.117 LINK simple_copy 00:26:26.117 LINK bdevperf 00:26:26.117 LINK memory_ut 00:26:26.117 LINK connect_stress 00:26:26.117 CC test/nvme/compliance/nvme_compliance.o 00:26:26.117 LINK boot_partition 00:26:26.117 CXX test/cpp_headers/env.o 00:26:26.117 CXX test/cpp_headers/event.o 00:26:26.117 CXX test/cpp_headers/fd_group.o 00:26:26.117 CC test/bdev/bdevio/bdevio.o 00:26:26.375 CXX test/cpp_headers/fd.o 00:26:26.375 CC test/nvme/fused_ordering/fused_ordering.o 00:26:26.375 CXX 
test/cpp_headers/file.o 00:26:26.375 CXX test/cpp_headers/fsdev.o 00:26:26.375 CXX test/cpp_headers/fsdev_module.o 00:26:26.375 CC test/nvme/doorbell_aers/doorbell_aers.o 00:26:26.375 CC test/nvme/fdp/fdp.o 00:26:26.375 CC examples/nvmf/nvmf/nvmf.o 00:26:26.375 LINK nvme_compliance 00:26:26.633 LINK fused_ordering 00:26:26.633 CXX test/cpp_headers/ftl.o 00:26:26.633 CXX test/cpp_headers/fuse_dispatcher.o 00:26:26.633 CXX test/cpp_headers/gpt_spec.o 00:26:26.633 LINK doorbell_aers 00:26:26.633 CXX test/cpp_headers/hexlify.o 00:26:26.633 LINK bdevio 00:26:26.633 CXX test/cpp_headers/histogram_data.o 00:26:26.633 CXX test/cpp_headers/idxd.o 00:26:26.633 CC test/nvme/cuse/cuse.o 00:26:26.633 CXX test/cpp_headers/idxd_spec.o 00:26:26.633 CXX test/cpp_headers/init.o 00:26:26.633 CXX test/cpp_headers/ioat.o 00:26:26.633 LINK nvmf 00:26:26.890 LINK fdp 00:26:26.890 CXX test/cpp_headers/ioat_spec.o 00:26:26.890 CXX test/cpp_headers/iscsi_spec.o 00:26:26.890 CXX test/cpp_headers/json.o 00:26:26.890 CXX test/cpp_headers/jsonrpc.o 00:26:26.890 CXX test/cpp_headers/keyring.o 00:26:26.890 CXX test/cpp_headers/keyring_module.o 00:26:26.891 CXX test/cpp_headers/likely.o 00:26:26.891 CXX test/cpp_headers/log.o 00:26:26.891 CXX test/cpp_headers/lvol.o 00:26:26.891 CXX test/cpp_headers/md5.o 00:26:26.891 CXX test/cpp_headers/memory.o 00:26:26.891 CXX test/cpp_headers/mmio.o 00:26:26.891 CXX test/cpp_headers/nbd.o 00:26:27.148 CXX test/cpp_headers/net.o 00:26:27.148 CXX test/cpp_headers/notify.o 00:26:27.148 CXX test/cpp_headers/nvme.o 00:26:27.148 CXX test/cpp_headers/nvme_intel.o 00:26:27.148 CXX test/cpp_headers/nvme_ocssd.o 00:26:27.148 CXX test/cpp_headers/nvme_ocssd_spec.o 00:26:27.148 CXX test/cpp_headers/nvme_spec.o 00:26:27.148 CXX test/cpp_headers/nvme_zns.o 00:26:27.148 CXX test/cpp_headers/nvmf_cmd.o 00:26:27.148 CXX test/cpp_headers/nvmf_fc_spec.o 00:26:27.148 CXX test/cpp_headers/nvmf.o 00:26:27.148 CXX test/cpp_headers/nvmf_spec.o 00:26:27.148 CXX test/cpp_headers/nvmf_transport.o 00:26:27.148 CXX test/cpp_headers/opal.o 00:26:27.405 CXX test/cpp_headers/opal_spec.o 00:26:27.405 CXX test/cpp_headers/pci_ids.o 00:26:27.405 CXX test/cpp_headers/pipe.o 00:26:27.405 CXX test/cpp_headers/queue.o 00:26:27.405 CXX test/cpp_headers/reduce.o 00:26:27.405 CXX test/cpp_headers/rpc.o 00:26:27.405 CXX test/cpp_headers/scheduler.o 00:26:27.405 CXX test/cpp_headers/scsi.o 00:26:27.405 CXX test/cpp_headers/scsi_spec.o 00:26:27.405 CXX test/cpp_headers/sock.o 00:26:27.405 CXX test/cpp_headers/stdinc.o 00:26:27.405 CXX test/cpp_headers/string.o 00:26:27.405 CXX test/cpp_headers/thread.o 00:26:27.405 CXX test/cpp_headers/trace.o 00:26:27.405 CXX test/cpp_headers/trace_parser.o 00:26:27.405 CXX test/cpp_headers/tree.o 00:26:27.663 CXX test/cpp_headers/ublk.o 00:26:27.663 CXX test/cpp_headers/util.o 00:26:27.663 CXX test/cpp_headers/uuid.o 00:26:27.663 CXX test/cpp_headers/version.o 00:26:27.663 CXX test/cpp_headers/vfio_user_pci.o 00:26:27.663 CXX test/cpp_headers/vfio_user_spec.o 00:26:27.663 CXX test/cpp_headers/vhost.o 00:26:27.663 CXX test/cpp_headers/vmd.o 00:26:27.663 CXX test/cpp_headers/xor.o 00:26:27.663 CXX test/cpp_headers/zipf.o 00:26:27.921 LINK cuse 00:26:29.817 LINK esnap 00:26:30.075 00:26:30.075 real 1m7.727s 00:26:30.075 user 6m24.321s 00:26:30.075 sys 1m7.028s 00:26:30.075 15:51:51 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:26:30.075 15:51:51 make -- common/autotest_common.sh@10 -- $ set +x 00:26:30.075 ************************************ 00:26:30.075 END TEST make 
00:26:30.075 ************************************ 00:26:30.075 15:51:51 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:26:30.075 15:51:51 -- pm/common@29 -- $ signal_monitor_resources TERM 00:26:30.075 15:51:51 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:26:30.075 15:51:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:30.075 15:51:51 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:26:30.075 15:51:51 -- pm/common@44 -- $ pid=5081 00:26:30.075 15:51:51 -- pm/common@50 -- $ kill -TERM 5081 00:26:30.075 15:51:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:30.075 15:51:51 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:26:30.075 15:51:51 -- pm/common@44 -- $ pid=5082 00:26:30.075 15:51:51 -- pm/common@50 -- $ kill -TERM 5082 00:26:30.075 15:51:51 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:26:30.075 15:51:51 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:26:30.333 15:51:51 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:30.333 15:51:51 -- common/autotest_common.sh@1691 -- # lcov --version 00:26:30.333 15:51:51 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:30.333 15:51:51 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:30.333 15:51:51 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:30.333 15:51:51 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:30.333 15:51:51 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:30.333 15:51:51 -- scripts/common.sh@336 -- # IFS=.-: 00:26:30.333 15:51:51 -- scripts/common.sh@336 -- # read -ra ver1 00:26:30.333 15:51:51 -- scripts/common.sh@337 -- # IFS=.-: 00:26:30.333 15:51:51 -- scripts/common.sh@337 -- # read -ra ver2 00:26:30.333 15:51:51 -- scripts/common.sh@338 -- # local 'op=<' 00:26:30.333 15:51:51 -- scripts/common.sh@340 -- # ver1_l=2 00:26:30.333 15:51:51 -- scripts/common.sh@341 -- # ver2_l=1 00:26:30.333 15:51:51 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:30.333 15:51:51 -- scripts/common.sh@344 -- # case "$op" in 00:26:30.333 15:51:51 -- scripts/common.sh@345 -- # : 1 00:26:30.333 15:51:51 -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:30.333 15:51:51 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:30.333 15:51:51 -- scripts/common.sh@365 -- # decimal 1 00:26:30.333 15:51:51 -- scripts/common.sh@353 -- # local d=1 00:26:30.333 15:51:51 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:30.333 15:51:51 -- scripts/common.sh@355 -- # echo 1 00:26:30.333 15:51:51 -- scripts/common.sh@365 -- # ver1[v]=1 00:26:30.333 15:51:51 -- scripts/common.sh@366 -- # decimal 2 00:26:30.333 15:51:51 -- scripts/common.sh@353 -- # local d=2 00:26:30.333 15:51:51 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:30.333 15:51:51 -- scripts/common.sh@355 -- # echo 2 00:26:30.334 15:51:51 -- scripts/common.sh@366 -- # ver2[v]=2 00:26:30.334 15:51:51 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:30.334 15:51:51 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:30.334 15:51:51 -- scripts/common.sh@368 -- # return 0 00:26:30.334 15:51:51 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:30.334 15:51:51 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:30.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:30.334 --rc genhtml_branch_coverage=1 00:26:30.334 --rc genhtml_function_coverage=1 00:26:30.334 --rc genhtml_legend=1 00:26:30.334 --rc geninfo_all_blocks=1 00:26:30.334 --rc geninfo_unexecuted_blocks=1 00:26:30.334 00:26:30.334 ' 00:26:30.334 15:51:51 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:30.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:30.334 --rc genhtml_branch_coverage=1 00:26:30.334 --rc genhtml_function_coverage=1 00:26:30.334 --rc genhtml_legend=1 00:26:30.334 --rc geninfo_all_blocks=1 00:26:30.334 --rc geninfo_unexecuted_blocks=1 00:26:30.334 00:26:30.334 ' 00:26:30.334 15:51:51 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:30.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:30.334 --rc genhtml_branch_coverage=1 00:26:30.334 --rc genhtml_function_coverage=1 00:26:30.334 --rc genhtml_legend=1 00:26:30.334 --rc geninfo_all_blocks=1 00:26:30.334 --rc geninfo_unexecuted_blocks=1 00:26:30.334 00:26:30.334 ' 00:26:30.334 15:51:51 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:30.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:30.334 --rc genhtml_branch_coverage=1 00:26:30.334 --rc genhtml_function_coverage=1 00:26:30.334 --rc genhtml_legend=1 00:26:30.334 --rc geninfo_all_blocks=1 00:26:30.334 --rc geninfo_unexecuted_blocks=1 00:26:30.334 00:26:30.334 ' 00:26:30.334 15:51:51 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:30.334 15:51:51 -- nvmf/common.sh@7 -- # uname -s 00:26:30.334 15:51:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:30.334 15:51:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:30.334 15:51:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:30.334 15:51:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:30.334 15:51:51 -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:30.334 15:51:51 -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:26:30.334 15:51:51 -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:30.334 15:51:51 -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:26:30.334 15:51:51 -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1f363de5-7a80-42b1-b2e8-064deed1963e 00:26:30.334 15:51:51 -- nvmf/common.sh@16 -- # NVME_HOSTID=1f363de5-7a80-42b1-b2e8-064deed1963e 00:26:30.334 15:51:51 -- nvmf/common.sh@17 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:30.334 15:51:51 -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:26:30.334 15:51:51 -- nvmf/common.sh@19 -- # NET_TYPE=phy-fallback 00:26:30.334 15:51:51 -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:30.334 15:51:51 -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:30.334 15:51:51 -- scripts/common.sh@15 -- # shopt -s extglob 00:26:30.334 15:51:51 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:30.334 15:51:51 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:30.334 15:51:51 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:30.334 15:51:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.334 15:51:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.334 15:51:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.334 15:51:51 -- paths/export.sh@5 -- # export PATH 00:26:30.334 15:51:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.334 15:51:51 -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:26:30.334 15:51:51 -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:26:30.334 15:51:51 -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:26:30.334 15:51:51 -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:26:30.334 15:51:51 -- nvmf/common.sh@50 -- # : 0 00:26:30.334 15:51:51 -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:26:30.334 15:51:51 -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:26:30.334 15:51:51 -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:26:30.334 15:51:51 -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:30.334 15:51:51 -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:30.334 15:51:51 -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:26:30.334 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:26:30.334 15:51:51 -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:26:30.334 15:51:51 -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:26:30.334 15:51:51 -- nvmf/common.sh@54 -- # have_pci_nics=0 00:26:30.334 15:51:51 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:26:30.334 15:51:51 -- spdk/autotest.sh@32 -- # uname -s 00:26:30.334 15:51:51 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:26:30.334 15:51:51 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:26:30.334 15:51:51 -- spdk/autotest.sh@34 -- # mkdir -p 
/home/vagrant/spdk_repo/spdk/../output/coredumps 00:26:30.334 15:51:51 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:26:30.334 15:51:51 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:26:30.334 15:51:51 -- spdk/autotest.sh@44 -- # modprobe nbd 00:26:30.334 15:51:51 -- spdk/autotest.sh@46 -- # type -P udevadm 00:26:30.334 15:51:51 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:26:30.334 15:51:51 -- spdk/autotest.sh@48 -- # udevadm_pid=54251 00:26:30.334 15:51:51 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:26:30.334 15:51:51 -- pm/common@17 -- # local monitor 00:26:30.334 15:51:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:26:30.334 15:51:51 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:26:30.334 15:51:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:26:30.334 15:51:51 -- pm/common@25 -- # sleep 1 00:26:30.334 15:51:51 -- pm/common@21 -- # date +%s 00:26:30.334 15:51:51 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730821911 00:26:30.334 15:51:51 -- pm/common@21 -- # date +%s 00:26:30.334 15:51:51 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730821911 00:26:30.334 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730821911_collect-vmstat.pm.log 00:26:30.334 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730821911_collect-cpu-load.pm.log 00:26:31.267 15:51:52 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:26:31.267 15:51:52 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:26:31.267 15:51:52 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:31.267 15:51:52 -- common/autotest_common.sh@10 -- # set +x 00:26:31.267 15:51:52 -- spdk/autotest.sh@59 -- # create_test_list 00:26:31.267 15:51:52 -- common/autotest_common.sh@750 -- # xtrace_disable 00:26:31.267 15:51:52 -- common/autotest_common.sh@10 -- # set +x 00:26:31.525 15:51:52 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:26:31.525 15:51:52 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:26:31.525 15:51:52 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:26:31.525 15:51:52 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:26:31.525 15:51:52 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:26:31.525 15:51:52 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:26:31.525 15:51:52 -- common/autotest_common.sh@1455 -- # uname 00:26:31.525 15:51:52 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:26:31.525 15:51:52 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:26:31.525 15:51:52 -- common/autotest_common.sh@1475 -- # uname 00:26:31.525 15:51:52 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:26:31.525 15:51:52 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:26:31.525 15:51:52 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:26:31.525 lcov: LCOV version 1.15 00:26:31.525 15:51:52 -- 
spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:26:46.442 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:26:46.442 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:27:01.356 15:52:20 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:27:01.356 15:52:20 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:01.356 15:52:20 -- common/autotest_common.sh@10 -- # set +x 00:27:01.356 15:52:20 -- spdk/autotest.sh@78 -- # rm -f 00:27:01.356 15:52:20 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:01.356 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:01.356 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:27:01.356 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:27:01.356 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:27:01.356 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:27:01.356 15:52:21 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:27:01.356 15:52:21 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:27:01.356 15:52:21 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:27:01.356 15:52:21 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:27:01.356 15:52:21 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:27:01.356 15:52:21 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:27:01.356 15:52:21 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:27:01.356 15:52:21 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:01.356 15:52:21 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:27:01.356 15:52:21 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:27:01.356 15:52:21 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:27:01.356 15:52:21 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:27:01.356 15:52:21 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:27:01.356 15:52:21 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:27:01.356 15:52:21 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:27:01.356 15:52:21 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:27:01.356 15:52:21 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:27:01.356 15:52:21 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:27:01.356 15:52:21 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:27:01.357 15:52:21 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:27:01.357 15:52:21 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:27:01.357 15:52:21 -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:27:01.357 15:52:21 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:27:01.357 15:52:21 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:27:01.357 15:52:21 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:27:01.357 15:52:21 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 
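The is_block_zoned probes traced through here are autotest's pre-cleanup scan for zoned NVMe namespaces. Reconstructed from the xtrace records (function and variable names follow the trace; the value stored per device and the exact return handling are assumptions), the logic is roughly:

    is_block_zoned() {
        local device=$1
        # Non-zoned namespaces report "none" in queue/zoned, which is why
        # every probe in this run evaluates [[ none != none ]] and fails.
        [[ -e /sys/block/$device/queue/zoned ]] || return 1
        [[ $(<"/sys/block/$device/queue/zoned") != none ]]
    }

    get_zoned_devs() {
        local -gA zoned_devs=()
        local nvme
        for nvme in /sys/block/nvme*; do
            if is_block_zoned "${nvme##*/}"; then
                zoned_devs[${nvme##*/}]=1   # stored value is an assumption
            fi
        done
    }

On this QEMU host every namespace reports "none", so zoned_devs stays empty and the (( 0 > 0 )) check below skips the zoned-device special-casing.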
00:27:01.357 15:52:21 -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:27:01.357 15:52:21 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:27:01.357 15:52:21 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:27:01.357 15:52:21 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:27:01.357 15:52:21 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:27:01.357 15:52:21 -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:27:01.357 15:52:21 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:27:01.357 15:52:21 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:27:01.357 15:52:21 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:27:01.357 15:52:21 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:27:01.357 15:52:21 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:27:01.357 15:52:21 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:27:01.357 15:52:21 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:27:01.357 15:52:21 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:27:01.357 15:52:21 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:27:01.357 15:52:21 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:27:01.357 15:52:21 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:27:01.357 15:52:21 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:27:01.357 15:52:21 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:27:01.357 No valid GPT data, bailing 00:27:01.357 15:52:21 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:01.357 15:52:21 -- scripts/common.sh@394 -- # pt= 00:27:01.357 15:52:21 -- scripts/common.sh@395 -- # return 1 00:27:01.357 15:52:21 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:27:01.357 1+0 records in 00:27:01.357 1+0 records out 00:27:01.357 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102265 s, 103 MB/s 00:27:01.357 15:52:21 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:27:01.357 15:52:21 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:27:01.357 15:52:21 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:27:01.357 15:52:21 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:27:01.357 15:52:21 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:27:01.357 No valid GPT data, bailing 00:27:01.357 15:52:21 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:27:01.357 15:52:21 -- scripts/common.sh@394 -- # pt= 00:27:01.357 15:52:21 -- scripts/common.sh@395 -- # return 1 00:27:01.357 15:52:21 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:27:01.357 1+0 records in 00:27:01.357 1+0 records out 00:27:01.357 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00305369 s, 343 MB/s 00:27:01.357 15:52:21 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:27:01.357 15:52:21 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:27:01.357 15:52:21 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:27:01.357 15:52:21 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:27:01.357 15:52:21 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:27:01.357 No valid GPT data, bailing 00:27:01.357 15:52:21 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:27:01.357 15:52:21 -- 
scripts/common.sh@394 -- # pt= 00:27:01.357 15:52:21 -- scripts/common.sh@395 -- # return 1 00:27:01.357 15:52:21 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:27:01.357 1+0 records in 00:27:01.357 1+0 records out 00:27:01.357 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00414737 s, 253 MB/s 00:27:01.357 15:52:21 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:27:01.357 15:52:21 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:27:01.357 15:52:21 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:27:01.357 15:52:21 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:27:01.357 15:52:21 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:27:01.357 No valid GPT data, bailing 00:27:01.357 15:52:21 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:27:01.357 15:52:21 -- scripts/common.sh@394 -- # pt= 00:27:01.357 15:52:21 -- scripts/common.sh@395 -- # return 1 00:27:01.357 15:52:21 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:27:01.357 1+0 records in 00:27:01.357 1+0 records out 00:27:01.357 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00433857 s, 242 MB/s 00:27:01.357 15:52:21 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:27:01.357 15:52:21 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:27:01.357 15:52:21 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:27:01.357 15:52:21 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:27:01.357 15:52:21 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:27:01.357 No valid GPT data, bailing 00:27:01.357 15:52:21 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:27:01.357 15:52:21 -- scripts/common.sh@394 -- # pt= 00:27:01.357 15:52:21 -- scripts/common.sh@395 -- # return 1 00:27:01.357 15:52:21 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:27:01.357 1+0 records in 00:27:01.357 1+0 records out 00:27:01.357 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00381019 s, 275 MB/s 00:27:01.357 15:52:21 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:27:01.357 15:52:21 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:27:01.357 15:52:21 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:27:01.357 15:52:21 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:27:01.357 15:52:21 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:27:01.357 No valid GPT data, bailing 00:27:01.357 15:52:21 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:27:01.357 15:52:21 -- scripts/common.sh@394 -- # pt= 00:27:01.357 15:52:21 -- scripts/common.sh@395 -- # return 1 00:27:01.357 15:52:21 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:27:01.357 1+0 records in 00:27:01.357 1+0 records out 00:27:01.357 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00443542 s, 236 MB/s 00:27:01.357 15:52:21 -- spdk/autotest.sh@105 -- # sync 00:27:01.357 15:52:22 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:27:01.357 15:52:22 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:27:01.357 15:52:22 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:27:02.731 15:52:23 -- spdk/autotest.sh@111 -- # uname -s 00:27:02.731 15:52:23 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:27:02.731 15:52:23 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:27:02.731 15:52:23 -- 
spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:27:02.989 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:03.247 Hugepages 00:27:03.247 node hugesize free / total 00:27:03.247 node0 1048576kB 0 / 0 00:27:03.247 node0 2048kB 0 / 0 00:27:03.247 00:27:03.247 Type BDF Vendor Device NUMA Driver Device Block devices 00:27:03.247 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:27:03.247 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:27:03.505 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:27:03.505 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:27:03.505 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:27:03.505 15:52:24 -- spdk/autotest.sh@117 -- # uname -s 00:27:03.505 15:52:24 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:27:03.505 15:52:24 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:27:03.505 15:52:24 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:04.071 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:04.329 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:27:04.330 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:27:04.330 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:27:04.330 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:27:04.587 15:52:25 -- common/autotest_common.sh@1515 -- # sleep 1 00:27:05.521 15:52:26 -- common/autotest_common.sh@1516 -- # bdfs=() 00:27:05.521 15:52:26 -- common/autotest_common.sh@1516 -- # local bdfs 00:27:05.521 15:52:26 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:27:05.521 15:52:26 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:27:05.521 15:52:26 -- common/autotest_common.sh@1496 -- # bdfs=() 00:27:05.521 15:52:26 -- common/autotest_common.sh@1496 -- # local bdfs 00:27:05.521 15:52:26 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:05.521 15:52:26 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:27:05.521 15:52:26 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:27:05.521 15:52:26 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:27:05.521 15:52:26 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:27:05.521 15:52:26 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:05.778 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:06.036 Waiting for block devices as requested 00:27:06.036 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:27:06.036 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:27:06.036 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:27:06.293 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:27:11.564 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:27:11.564 15:52:32 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:27:11.564 15:52:32 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:27:11.564 15:52:32 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:27:11.564 15:52:32 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 
/sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:27:11.564 15:52:32 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:27:11.564 15:52:32 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:27:11.564 15:52:32 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:27:11.564 15:52:32 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:27:11.564 15:52:32 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:27:11.564 15:52:32 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:27:11.564 15:52:32 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:27:11.564 15:52:32 -- common/autotest_common.sh@1529 -- # grep oacs 00:27:11.564 15:52:32 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:27:11.564 15:52:32 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:27:11.564 15:52:32 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:27:11.564 15:52:32 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:27:11.564 15:52:32 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:27:11.564 15:52:32 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:27:11.564 15:52:32 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:27:11.564 15:52:32 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:27:11.564 15:52:32 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:27:11.564 15:52:32 -- common/autotest_common.sh@1541 -- # continue 00:27:11.564 15:52:32 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:27:11.564 15:52:32 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:27:11.564 15:52:32 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:27:11.564 15:52:32 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:27:11.564 15:52:32 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:27:11.564 15:52:32 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:27:11.564 15:52:32 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:27:11.564 15:52:32 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:27:11.564 15:52:32 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:27:11.564 15:52:32 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:27:11.564 15:52:32 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:27:11.564 15:52:32 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:27:11.564 15:52:32 -- common/autotest_common.sh@1529 -- # grep oacs 00:27:11.564 15:52:32 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:27:11.564 15:52:32 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:27:11.564 15:52:32 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:27:11.564 15:52:32 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:27:11.564 15:52:32 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:27:11.564 15:52:32 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:27:11.564 15:52:32 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:27:11.564 15:52:32 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:27:11.564 15:52:32 -- common/autotest_common.sh@1541 -- # continue 00:27:11.564 15:52:32 -- 
common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:27:11.564 15:52:32 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:27:11.564 15:52:32 -- common/autotest_common.sh@1485 -- # grep 0000:00:12.0/nvme/nvme 00:27:11.564 15:52:32 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:27:11.564 15:52:32 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:27:11.564 15:52:32 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:27:11.564 15:52:32 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:27:11.564 15:52:32 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme2 00:27:11.564 15:52:32 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme2 00:27:11.564 15:52:32 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme2 ]] 00:27:11.564 15:52:32 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme2 00:27:11.564 15:52:32 -- common/autotest_common.sh@1529 -- # grep oacs 00:27:11.564 15:52:32 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:27:11.564 15:52:32 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:27:11.564 15:52:32 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:27:11.564 15:52:32 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:27:11.564 15:52:32 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:27:11.564 15:52:32 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme2 00:27:11.564 15:52:32 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:27:11.564 15:52:32 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:27:11.564 15:52:32 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:27:11.564 15:52:32 -- common/autotest_common.sh@1541 -- # continue 00:27:11.564 15:52:32 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:27:11.564 15:52:32 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:27:11.564 15:52:32 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:27:11.564 15:52:32 -- common/autotest_common.sh@1485 -- # grep 0000:00:13.0/nvme/nvme 00:27:11.564 15:52:32 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:27:11.564 15:52:32 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:27:11.564 15:52:32 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:27:11.564 15:52:32 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme3 00:27:11.564 15:52:32 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme3 00:27:11.564 15:52:32 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme3 ]] 00:27:11.564 15:52:32 -- common/autotest_common.sh@1529 -- # grep oacs 00:27:11.564 15:52:32 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme3 00:27:11.564 15:52:32 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:27:11.564 15:52:32 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:27:11.564 15:52:32 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:27:11.564 15:52:32 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:27:11.564 15:52:32 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme3 00:27:11.564 15:52:32 -- 
common/autotest_common.sh@1538 -- # cut -d: -f2 00:27:11.564 15:52:32 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:27:11.564 15:52:32 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:27:11.564 15:52:32 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:27:11.564 15:52:32 -- common/autotest_common.sh@1541 -- # continue 00:27:11.564 15:52:32 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:27:11.564 15:52:32 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:11.564 15:52:32 -- common/autotest_common.sh@10 -- # set +x 00:27:11.564 15:52:32 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:27:11.564 15:52:32 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:11.564 15:52:32 -- common/autotest_common.sh@10 -- # set +x 00:27:11.564 15:52:32 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:11.823 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:12.081 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:27:12.081 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:27:12.081 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:27:12.360 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:27:12.360 15:52:33 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:27:12.360 15:52:33 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:12.360 15:52:33 -- common/autotest_common.sh@10 -- # set +x 00:27:12.360 15:52:33 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:27:12.360 15:52:33 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:27:12.360 15:52:33 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:27:12.360 15:52:33 -- common/autotest_common.sh@1561 -- # bdfs=() 00:27:12.360 15:52:33 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:27:12.360 15:52:33 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:27:12.360 15:52:33 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:27:12.360 15:52:33 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:27:12.360 15:52:33 -- common/autotest_common.sh@1496 -- # bdfs=() 00:27:12.360 15:52:33 -- common/autotest_common.sh@1496 -- # local bdfs 00:27:12.360 15:52:33 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:12.360 15:52:33 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:27:12.360 15:52:33 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:27:12.360 15:52:33 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:27:12.360 15:52:33 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:27:12.360 15:52:33 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:27:12.360 15:52:33 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:27:12.360 15:52:33 -- common/autotest_common.sh@1564 -- # device=0x0010 00:27:12.360 15:52:33 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:27:12.360 15:52:33 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:27:12.360 15:52:33 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:27:12.360 15:52:33 -- common/autotest_common.sh@1564 -- # device=0x0010 00:27:12.360 15:52:33 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:27:12.360 15:52:33 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 
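The cat /sys/bus/pci/devices/<bdf>/device probes in this stretch implement get_nvme_bdfs_by_id 0x0a54, which opal_revert_cleanup uses to find controllers with PCI device id 0x0a54 (drives that may need an OPAL revert); the comparison appears in the trace as [[ 0x0010 == \0\x\0\a\5\4 ]]. A minimal sketch, with names taken from the trace and the output and return conventions assumed:

    get_nvme_bdfs_by_id() {
        local bdfs=() bdf device
        # gen_nvme.sh emits an SPDK bdev config; jq extracts each
        # controller's PCI address (traddr), as in the get_nvme_bdfs trace.
        for bdf in $("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'); do
            device=$(<"/sys/bus/pci/devices/$bdf/device")
            [[ $device == "$1" ]] && bdfs+=("$bdf")
        done
        ((${#bdfs[@]} > 0)) && printf '%s\n' "${bdfs[@]}"
        return 0
    }

All four emulated controllers in this run report 0x0010, so the resulting list is empty and the OPAL revert is a no-op.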
00:27:12.360 15:52:33 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:27:12.360 15:52:33 -- common/autotest_common.sh@1564 -- # device=0x0010 00:27:12.360 15:52:33 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:27:12.360 15:52:33 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:27:12.360 15:52:33 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:27:12.360 15:52:33 -- common/autotest_common.sh@1564 -- # device=0x0010 00:27:12.360 15:52:33 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:27:12.360 15:52:33 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:27:12.360 15:52:33 -- common/autotest_common.sh@1570 -- # return 0 00:27:12.360 15:52:33 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:27:12.360 15:52:33 -- common/autotest_common.sh@1578 -- # return 0 00:27:12.360 15:52:33 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:27:12.360 15:52:33 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:27:12.360 15:52:33 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:27:12.360 15:52:33 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:27:12.360 15:52:33 -- spdk/autotest.sh@149 -- # timing_enter lib 00:27:12.360 15:52:33 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:12.360 15:52:33 -- common/autotest_common.sh@10 -- # set +x 00:27:12.360 15:52:33 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:27:12.360 15:52:33 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:27:12.360 15:52:33 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:27:12.360 15:52:33 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:12.360 15:52:33 -- common/autotest_common.sh@10 -- # set +x 00:27:12.360 ************************************ 00:27:12.360 START TEST env 00:27:12.360 ************************************ 00:27:12.360 15:52:33 env -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:27:12.360 * Looking for test storage... 00:27:12.360 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:27:12.360 15:52:33 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:12.360 15:52:33 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:12.360 15:52:33 env -- common/autotest_common.sh@1691 -- # lcov --version 00:27:12.619 15:52:33 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:12.619 15:52:33 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:12.619 15:52:33 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:12.619 15:52:33 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:12.619 15:52:33 env -- scripts/common.sh@336 -- # IFS=.-: 00:27:12.619 15:52:33 env -- scripts/common.sh@336 -- # read -ra ver1 00:27:12.619 15:52:33 env -- scripts/common.sh@337 -- # IFS=.-: 00:27:12.619 15:52:33 env -- scripts/common.sh@337 -- # read -ra ver2 00:27:12.619 15:52:33 env -- scripts/common.sh@338 -- # local 'op=<' 00:27:12.619 15:52:33 env -- scripts/common.sh@340 -- # ver1_l=2 00:27:12.619 15:52:33 env -- scripts/common.sh@341 -- # ver2_l=1 00:27:12.619 15:52:33 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:12.619 15:52:33 env -- scripts/common.sh@344 -- # case "$op" in 00:27:12.619 15:52:33 env -- scripts/common.sh@345 -- # : 1 00:27:12.619 15:52:33 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:12.619 15:52:33 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:12.619 15:52:33 env -- scripts/common.sh@365 -- # decimal 1 00:27:12.619 15:52:33 env -- scripts/common.sh@353 -- # local d=1 00:27:12.619 15:52:33 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:12.619 15:52:33 env -- scripts/common.sh@355 -- # echo 1 00:27:12.619 15:52:33 env -- scripts/common.sh@365 -- # ver1[v]=1 00:27:12.619 15:52:33 env -- scripts/common.sh@366 -- # decimal 2 00:27:12.619 15:52:33 env -- scripts/common.sh@353 -- # local d=2 00:27:12.619 15:52:33 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:12.619 15:52:33 env -- scripts/common.sh@355 -- # echo 2 00:27:12.619 15:52:33 env -- scripts/common.sh@366 -- # ver2[v]=2 00:27:12.619 15:52:33 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:12.619 15:52:33 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:12.620 15:52:33 env -- scripts/common.sh@368 -- # return 0 00:27:12.620 15:52:33 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:12.620 15:52:33 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:12.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:12.620 --rc genhtml_branch_coverage=1 00:27:12.620 --rc genhtml_function_coverage=1 00:27:12.620 --rc genhtml_legend=1 00:27:12.620 --rc geninfo_all_blocks=1 00:27:12.620 --rc geninfo_unexecuted_blocks=1 00:27:12.620 00:27:12.620 ' 00:27:12.620 15:52:33 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:12.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:12.620 --rc genhtml_branch_coverage=1 00:27:12.620 --rc genhtml_function_coverage=1 00:27:12.620 --rc genhtml_legend=1 00:27:12.620 --rc geninfo_all_blocks=1 00:27:12.620 --rc geninfo_unexecuted_blocks=1 00:27:12.620 00:27:12.620 ' 00:27:12.620 15:52:33 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:12.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:12.620 --rc genhtml_branch_coverage=1 00:27:12.620 --rc genhtml_function_coverage=1 00:27:12.620 --rc genhtml_legend=1 00:27:12.620 --rc geninfo_all_blocks=1 00:27:12.620 --rc geninfo_unexecuted_blocks=1 00:27:12.620 00:27:12.620 ' 00:27:12.620 15:52:33 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:12.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:12.620 --rc genhtml_branch_coverage=1 00:27:12.620 --rc genhtml_function_coverage=1 00:27:12.620 --rc genhtml_legend=1 00:27:12.620 --rc geninfo_all_blocks=1 00:27:12.620 --rc geninfo_unexecuted_blocks=1 00:27:12.620 00:27:12.620 ' 00:27:12.620 15:52:33 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:27:12.620 15:52:33 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:27:12.620 15:52:33 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:12.620 15:52:33 env -- common/autotest_common.sh@10 -- # set +x 00:27:12.620 ************************************ 00:27:12.620 START TEST env_memory 00:27:12.620 ************************************ 00:27:12.620 15:52:33 env.env_memory -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:27:12.620 00:27:12.620 00:27:12.620 CUnit - A unit testing framework for C - Version 2.1-3 00:27:12.620 http://cunit.sourceforge.net/ 00:27:12.620 00:27:12.620 00:27:12.620 Suite: memory 00:27:12.620 Test: alloc and free memory map ...[2024-11-05 15:52:33.837974] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:27:12.620 passed 00:27:12.620 Test: mem map translation ...[2024-11-05 15:52:33.876784] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:27:12.620 [2024-11-05 15:52:33.876828] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:27:12.620 [2024-11-05 15:52:33.876888] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:27:12.620 [2024-11-05 15:52:33.876903] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:27:12.620 passed 00:27:12.620 Test: mem map registration ...[2024-11-05 15:52:33.944928] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:27:12.620 [2024-11-05 15:52:33.944974] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:27:12.620 passed 00:27:12.879 Test: mem map adjacent registrations ...passed 00:27:12.879 00:27:12.879 Run Summary: Type Total Ran Passed Failed Inactive 00:27:12.879 suites 1 1 n/a 0 0 00:27:12.879 tests 4 4 4 0 0 00:27:12.879 asserts 152 152 152 0 n/a 00:27:12.879 00:27:12.879 Elapsed time = 0.233 seconds 00:27:12.879 00:27:12.879 real 0m0.261s 00:27:12.879 user 0m0.240s 00:27:12.879 sys 0m0.014s 00:27:12.879 15:52:34 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:12.879 15:52:34 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:27:12.879 ************************************ 00:27:12.879 END TEST env_memory 00:27:12.879 ************************************ 00:27:12.879 15:52:34 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:27:12.879 15:52:34 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:27:12.879 15:52:34 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:12.879 15:52:34 env -- common/autotest_common.sh@10 -- # set +x 00:27:12.879 ************************************ 00:27:12.879 START TEST env_vtophys 00:27:12.879 ************************************ 00:27:12.879 15:52:34 env.env_vtophys -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:27:12.879 EAL: lib.eal log level changed from notice to debug 00:27:12.879 EAL: Detected lcore 0 as core 0 on socket 0 00:27:12.879 EAL: Detected lcore 1 as core 0 on socket 0 00:27:12.879 EAL: Detected lcore 2 as core 0 on socket 0 00:27:12.879 EAL: Detected lcore 3 as core 0 on socket 0 00:27:12.879 EAL: Detected lcore 4 as core 0 on socket 0 00:27:12.879 EAL: Detected lcore 5 as core 0 on socket 0 00:27:12.879 EAL: Detected lcore 6 as core 0 on socket 0 00:27:12.879 EAL: Detected lcore 7 as core 0 on socket 0 00:27:12.879 EAL: Detected lcore 8 as core 0 on socket 0 00:27:12.879 EAL: Detected lcore 9 as core 0 on socket 0 00:27:12.879 EAL: Maximum logical cores by configuration: 128 00:27:12.879 EAL: Detected CPU lcores: 10 00:27:12.879 EAL: Detected NUMA nodes: 1 00:27:12.879 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:27:12.879 EAL: Detected shared linkage of DPDK 00:27:12.879 EAL: No 
shared files mode enabled, IPC will be disabled 00:27:12.879 EAL: Selected IOVA mode 'PA' 00:27:12.879 EAL: Probing VFIO support... 00:27:12.879 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:27:12.879 EAL: VFIO modules not loaded, skipping VFIO support... 00:27:12.879 EAL: Ask a virtual area of 0x2e000 bytes 00:27:12.879 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:27:12.879 EAL: Setting up physically contiguous memory... 00:27:12.879 EAL: Setting maximum number of open files to 524288 00:27:12.879 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:27:12.879 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:27:12.879 EAL: Ask a virtual area of 0x61000 bytes 00:27:12.879 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:27:12.879 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:27:12.879 EAL: Ask a virtual area of 0x400000000 bytes 00:27:12.879 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:27:12.879 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:27:12.879 EAL: Ask a virtual area of 0x61000 bytes 00:27:12.879 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:27:12.879 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:27:12.879 EAL: Ask a virtual area of 0x400000000 bytes 00:27:12.879 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:27:12.880 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:27:12.880 EAL: Ask a virtual area of 0x61000 bytes 00:27:12.880 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:27:12.880 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:27:12.880 EAL: Ask a virtual area of 0x400000000 bytes 00:27:12.880 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:27:12.880 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:27:12.880 EAL: Ask a virtual area of 0x61000 bytes 00:27:12.880 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:27:12.880 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:27:12.880 EAL: Ask a virtual area of 0x400000000 bytes 00:27:12.880 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:27:12.880 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:27:12.880 EAL: Hugepages will be freed exactly as allocated. 00:27:12.880 EAL: No shared files mode enabled, IPC is disabled 00:27:12.880 EAL: No shared files mode enabled, IPC is disabled 00:27:12.880 EAL: TSC frequency is ~2600000 KHz 00:27:12.880 EAL: Main lcore 0 is ready (tid=7f012c8c4a40;cpuset=[0]) 00:27:12.880 EAL: Trying to obtain current memory policy. 00:27:12.880 EAL: Setting policy MPOL_PREFERRED for socket 0 00:27:13.148 EAL: Restoring previous memory policy: 0 00:27:13.148 EAL: request: mp_malloc_sync 00:27:13.148 EAL: No shared files mode enabled, IPC is disabled 00:27:13.148 EAL: Heap on socket 0 was expanded by 2MB 00:27:13.148 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:27:13.148 EAL: No PCI address specified using 'addr=' in: bus=pci 00:27:13.148 EAL: Mem event callback 'spdk:(nil)' registered 00:27:13.148 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:27:13.148 00:27:13.148 00:27:13.148 CUnit - A unit testing framework for C - Version 2.1-3 00:27:13.148 http://cunit.sourceforge.net/ 00:27:13.148 00:27:13.148 00:27:13.148 Suite: components_suite 00:27:13.433 Test: vtophys_malloc_test ...passed 00:27:13.433 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:27:13.433 EAL: Setting policy MPOL_PREFERRED for socket 0 00:27:13.433 EAL: Restoring previous memory policy: 4 00:27:13.433 EAL: Calling mem event callback 'spdk:(nil)' 00:27:13.433 EAL: request: mp_malloc_sync 00:27:13.433 EAL: No shared files mode enabled, IPC is disabled 00:27:13.433 EAL: Heap on socket 0 was expanded by 4MB 00:27:13.433 EAL: Calling mem event callback 'spdk:(nil)' 00:27:13.433 EAL: request: mp_malloc_sync 00:27:13.433 EAL: No shared files mode enabled, IPC is disabled 00:27:13.433 EAL: Heap on socket 0 was shrunk by 4MB 00:27:13.433 EAL: Trying to obtain current memory policy. 00:27:13.433 EAL: Setting policy MPOL_PREFERRED for socket 0 00:27:13.433 EAL: Restoring previous memory policy: 4 00:27:13.433 EAL: Calling mem event callback 'spdk:(nil)' 00:27:13.433 EAL: request: mp_malloc_sync 00:27:13.433 EAL: No shared files mode enabled, IPC is disabled 00:27:13.433 EAL: Heap on socket 0 was expanded by 6MB 00:27:13.433 EAL: Calling mem event callback 'spdk:(nil)' 00:27:13.433 EAL: request: mp_malloc_sync 00:27:13.433 EAL: No shared files mode enabled, IPC is disabled 00:27:13.433 EAL: Heap on socket 0 was shrunk by 6MB 00:27:13.433 EAL: Trying to obtain current memory policy. 00:27:13.433 EAL: Setting policy MPOL_PREFERRED for socket 0 00:27:13.433 EAL: Restoring previous memory policy: 4 00:27:13.433 EAL: Calling mem event callback 'spdk:(nil)' 00:27:13.433 EAL: request: mp_malloc_sync 00:27:13.433 EAL: No shared files mode enabled, IPC is disabled 00:27:13.433 EAL: Heap on socket 0 was expanded by 10MB 00:27:13.433 EAL: Calling mem event callback 'spdk:(nil)' 00:27:13.433 EAL: request: mp_malloc_sync 00:27:13.433 EAL: No shared files mode enabled, IPC is disabled 00:27:13.433 EAL: Heap on socket 0 was shrunk by 10MB 00:27:13.433 EAL: Trying to obtain current memory policy. 00:27:13.433 EAL: Setting policy MPOL_PREFERRED for socket 0 00:27:13.433 EAL: Restoring previous memory policy: 4 00:27:13.433 EAL: Calling mem event callback 'spdk:(nil)' 00:27:13.433 EAL: request: mp_malloc_sync 00:27:13.433 EAL: No shared files mode enabled, IPC is disabled 00:27:13.433 EAL: Heap on socket 0 was expanded by 18MB 00:27:13.433 EAL: Calling mem event callback 'spdk:(nil)' 00:27:13.433 EAL: request: mp_malloc_sync 00:27:13.433 EAL: No shared files mode enabled, IPC is disabled 00:27:13.433 EAL: Heap on socket 0 was shrunk by 18MB 00:27:13.433 EAL: Trying to obtain current memory policy. 00:27:13.433 EAL: Setting policy MPOL_PREFERRED for socket 0 00:27:13.433 EAL: Restoring previous memory policy: 4 00:27:13.433 EAL: Calling mem event callback 'spdk:(nil)' 00:27:13.433 EAL: request: mp_malloc_sync 00:27:13.433 EAL: No shared files mode enabled, IPC is disabled 00:27:13.433 EAL: Heap on socket 0 was expanded by 34MB 00:27:13.433 EAL: Calling mem event callback 'spdk:(nil)' 00:27:13.433 EAL: request: mp_malloc_sync 00:27:13.433 EAL: No shared files mode enabled, IPC is disabled 00:27:13.433 EAL: Heap on socket 0 was shrunk by 34MB 00:27:13.433 EAL: Trying to obtain current memory policy. 
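(Note on the env_memory errors logged above: spdk_mem_register and spdk_mem_map_set_translation enforce 2 MB alignment on both address and length, which is exactly what the vaddr=200000 len=1234 rejections exercise. Below is a minimal sketch of the API under test; the region address, length, and translation value are illustrative assumptions, not the test's actual inputs.)

#include "spdk/env.h"

/* Illustrative 2 MB-aligned region; real code would point this at memory
 * that is actually mapped, e.g. a hugepage-backed allocation. */
#define REGION_VADDR ((void *)0x200000000000)
#define REGION_LEN   (2ULL * 1024 * 1024)

static void demo_mem_map(void)
{
	/* Registration must be 2 MB-aligned in address and length;
	 * vaddr=0x200000 len=1234, as in the log, is rejected. */
	if (spdk_mem_register(REGION_VADDR, REGION_LEN) != 0) {
		return;
	}

	/* A map with a default translation of 0 and no notify ops. */
	struct spdk_mem_map *map = spdk_mem_map_alloc(0, NULL, NULL);
	if (map != NULL) {
		/* The same 2 MB granularity applies to translations. */
		spdk_mem_map_set_translation(map, (uint64_t)REGION_VADDR,
					     REGION_LEN, 0x1000000);
		uint64_t size = REGION_LEN;
		spdk_mem_map_translate(map, (uint64_t)REGION_VADDR, &size);
		spdk_mem_map_free(&map);
	}
	spdk_mem_unregister(REGION_VADDR, REGION_LEN);
}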
00:27:13.433 EAL: Setting policy MPOL_PREFERRED for socket 0 00:27:13.433 EAL: Restoring previous memory policy: 4 00:27:13.433 EAL: Calling mem event callback 'spdk:(nil)' 00:27:13.433 EAL: request: mp_malloc_sync 00:27:13.433 EAL: No shared files mode enabled, IPC is disabled 00:27:13.433 EAL: Heap on socket 0 was expanded by 66MB 00:27:13.690 EAL: Calling mem event callback 'spdk:(nil)' 00:27:13.690 EAL: request: mp_malloc_sync 00:27:13.690 EAL: No shared files mode enabled, IPC is disabled 00:27:13.690 EAL: Heap on socket 0 was shrunk by 66MB 00:27:13.690 EAL: Trying to obtain current memory policy. 00:27:13.690 EAL: Setting policy MPOL_PREFERRED for socket 0 00:27:13.690 EAL: Restoring previous memory policy: 4 00:27:13.690 EAL: Calling mem event callback 'spdk:(nil)' 00:27:13.690 EAL: request: mp_malloc_sync 00:27:13.690 EAL: No shared files mode enabled, IPC is disabled 00:27:13.690 EAL: Heap on socket 0 was expanded by 130MB 00:27:13.690 EAL: Calling mem event callback 'spdk:(nil)' 00:27:13.690 EAL: request: mp_malloc_sync 00:27:13.690 EAL: No shared files mode enabled, IPC is disabled 00:27:13.690 EAL: Heap on socket 0 was shrunk by 130MB 00:27:13.948 EAL: Trying to obtain current memory policy. 00:27:13.948 EAL: Setting policy MPOL_PREFERRED for socket 0 00:27:13.948 EAL: Restoring previous memory policy: 4 00:27:13.948 EAL: Calling mem event callback 'spdk:(nil)' 00:27:13.948 EAL: request: mp_malloc_sync 00:27:13.948 EAL: No shared files mode enabled, IPC is disabled 00:27:13.948 EAL: Heap on socket 0 was expanded by 258MB 00:27:14.204 EAL: Calling mem event callback 'spdk:(nil)' 00:27:14.204 EAL: request: mp_malloc_sync 00:27:14.204 EAL: No shared files mode enabled, IPC is disabled 00:27:14.205 EAL: Heap on socket 0 was shrunk by 258MB 00:27:14.462 EAL: Trying to obtain current memory policy. 00:27:14.462 EAL: Setting policy MPOL_PREFERRED for socket 0 00:27:14.719 EAL: Restoring previous memory policy: 4 00:27:14.719 EAL: Calling mem event callback 'spdk:(nil)' 00:27:14.719 EAL: request: mp_malloc_sync 00:27:14.719 EAL: No shared files mode enabled, IPC is disabled 00:27:14.719 EAL: Heap on socket 0 was expanded by 514MB 00:27:15.328 EAL: Calling mem event callback 'spdk:(nil)' 00:27:15.328 EAL: request: mp_malloc_sync 00:27:15.328 EAL: No shared files mode enabled, IPC is disabled 00:27:15.328 EAL: Heap on socket 0 was shrunk by 514MB 00:27:15.895 EAL: Trying to obtain current memory policy. 
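(The expand/shrink pairs in this suite come from vtophys allocating and freeing DMA-safe buffers of growing size; each allocation fires the registered 'spdk:' mem event callback. A minimal sketch of that pattern, with a 2 MB size chosen purely for illustration:)

#include "spdk/env.h"

static int demo_vtophys(void)
{
	/* 2 MB buffer with 2 MB alignment; satisfying it may grow the EAL
	 * heap, which is what "Heap on socket 0 was expanded" reports. */
	void *buf = spdk_dma_zmalloc(2 * 1024 * 1024, 2 * 1024 * 1024, NULL);
	if (buf == NULL) {
		return -1;
	}

	uint64_t size = 2 * 1024 * 1024;
	uint64_t paddr = spdk_vtophys(buf, &size);
	if (paddr == SPDK_VTOPHYS_ERROR) {
		spdk_dma_free(buf);
		return -1;
	}

	/* Freeing can hand hugepages back ("Heap ... was shrunk"). */
	spdk_dma_free(buf);
	return 0;
}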
00:27:15.895 EAL: Setting policy MPOL_PREFERRED for socket 0 00:27:15.895 EAL: Restoring previous memory policy: 4 00:27:15.895 EAL: Calling mem event callback 'spdk:(nil)' 00:27:15.895 EAL: request: mp_malloc_sync 00:27:15.895 EAL: No shared files mode enabled, IPC is disabled 00:27:15.895 EAL: Heap on socket 0 was expanded by 1026MB 00:27:17.284 EAL: Calling mem event callback 'spdk:(nil)' 00:27:17.284 EAL: request: mp_malloc_sync 00:27:17.284 EAL: No shared files mode enabled, IPC is disabled 00:27:17.284 EAL: Heap on socket 0 was shrunk by 1026MB 00:27:18.263 passed 00:27:18.263 00:27:18.263 Run Summary: Type Total Ran Passed Failed Inactive 00:27:18.263 suites 1 1 n/a 0 0 00:27:18.263 tests 2 2 2 0 0 00:27:18.263 asserts 5810 5810 5810 0 n/a 00:27:18.263 00:27:18.263 Elapsed time = 5.051 seconds 00:27:18.263 EAL: Calling mem event callback 'spdk:(nil)' 00:27:18.263 EAL: request: mp_malloc_sync 00:27:18.263 EAL: No shared files mode enabled, IPC is disabled 00:27:18.263 EAL: Heap on socket 0 was shrunk by 2MB 00:27:18.263 EAL: No shared files mode enabled, IPC is disabled 00:27:18.263 EAL: No shared files mode enabled, IPC is disabled 00:27:18.263 EAL: No shared files mode enabled, IPC is disabled 00:27:18.263 00:27:18.263 real 0m5.300s 00:27:18.263 user 0m4.518s 00:27:18.263 sys 0m0.636s 00:27:18.263 15:52:39 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:18.263 ************************************ 00:27:18.263 END TEST env_vtophys 00:27:18.263 ************************************ 00:27:18.263 15:52:39 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:27:18.263 15:52:39 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:27:18.263 15:52:39 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:27:18.263 15:52:39 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:18.263 15:52:39 env -- common/autotest_common.sh@10 -- # set +x 00:27:18.263 ************************************ 00:27:18.263 START TEST env_pci 00:27:18.263 ************************************ 00:27:18.263 15:52:39 env.env_pci -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:27:18.263 00:27:18.263 00:27:18.263 CUnit - A unit testing framework for C - Version 2.1-3 00:27:18.263 http://cunit.sourceforge.net/ 00:27:18.263 00:27:18.263 00:27:18.263 Suite: pci 00:27:18.263 Test: pci_hook ...[2024-11-05 15:52:39.451524] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56992 has claimed it 00:27:18.263 EAL: Cannot find device (10000:00:01.0) 00:27:18.263 EAL: Failed to attach device on primary process 00:27:18.263 passed 00:27:18.263 00:27:18.263 Run Summary: Type Total Ran Passed Failed Inactive 00:27:18.263 suites 1 1 n/a 0 0 00:27:18.263 tests 1 1 1 0 0 00:27:18.263 asserts 25 25 25 0 n/a 00:27:18.263 00:27:18.263 Elapsed time = 0.007 seconds 00:27:18.263 00:27:18.263 real 0m0.058s 00:27:18.263 user 0m0.027s 00:27:18.263 sys 0m0.030s 00:27:18.264 15:52:39 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:18.264 15:52:39 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:27:18.264 ************************************ 00:27:18.264 END TEST env_pci 00:27:18.264 ************************************ 00:27:18.264 15:52:39 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:27:18.264 15:52:39 env -- env/env.sh@15 -- # uname 00:27:18.264 15:52:39 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:27:18.264 15:52:39 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:27:18.264 15:52:39 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:27:18.264 15:52:39 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:27:18.264 15:52:39 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:18.264 15:52:39 env -- common/autotest_common.sh@10 -- # set +x 00:27:18.264 ************************************ 00:27:18.264 START TEST env_dpdk_post_init 00:27:18.264 ************************************ 00:27:18.264 15:52:39 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:27:18.264 EAL: Detected CPU lcores: 10 00:27:18.264 EAL: Detected NUMA nodes: 1 00:27:18.264 EAL: Detected shared linkage of DPDK 00:27:18.264 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:27:18.264 EAL: Selected IOVA mode 'PA' 00:27:18.522 TELEMETRY: No legacy callbacks, legacy socket not created 00:27:18.522 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:27:18.522 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:27:18.522 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:27:18.522 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:27:18.522 Starting DPDK initialization... 00:27:18.522 Starting SPDK post initialization... 00:27:18.522 SPDK NVMe probe 00:27:18.522 Attaching to 0000:00:10.0 00:27:18.522 Attaching to 0000:00:11.0 00:27:18.522 Attaching to 0000:00:12.0 00:27:18.522 Attaching to 0000:00:13.0 00:27:18.522 Attached to 0000:00:13.0 00:27:18.522 Attached to 0000:00:10.0 00:27:18.522 Attached to 0000:00:11.0 00:27:18.522 Attached to 0000:00:12.0 00:27:18.522 Cleaning up... 
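(The Attaching/Attached lines above are env_dpdk_post_init enumerating the four emulated controllers; attach order is not guaranteed to match probe order, hence 13.0 completing first. A minimal sketch of the same probe/attach flow, with callback bodies trimmed to the essentials and the app name an arbitrary placeholder:)

#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

static bool probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
		     struct spdk_nvme_ctrlr_opts *opts)
{
	return true; /* attach to every controller that probes */
}

static void attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
		      struct spdk_nvme_ctrlr *ctrlr,
		      const struct spdk_nvme_ctrlr_opts *opts)
{
	printf("Attached to %s\n", trid->traddr);
}

int main(void)
{
	struct spdk_env_opts opts;

	spdk_env_opts_init(&opts);
	opts.name = "post_init_demo"; /* placeholder name */
	if (spdk_env_init(&opts) < 0) {
		return 1;
	}
	/* NULL trid means: walk the local PCIe bus, as the test does. */
	if (spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0) {
		return 1;
	}
	return 0;
}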
00:27:18.522 00:27:18.522 real 0m0.249s 00:27:18.522 user 0m0.070s 00:27:18.522 sys 0m0.082s 00:27:18.522 15:52:39 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:18.522 15:52:39 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:27:18.522 ************************************ 00:27:18.522 END TEST env_dpdk_post_init 00:27:18.522 ************************************ 00:27:18.522 15:52:39 env -- env/env.sh@26 -- # uname 00:27:18.522 15:52:39 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:27:18.522 15:52:39 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:27:18.522 15:52:39 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:27:18.522 15:52:39 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:18.522 15:52:39 env -- common/autotest_common.sh@10 -- # set +x 00:27:18.522 ************************************ 00:27:18.522 START TEST env_mem_callbacks 00:27:18.522 ************************************ 00:27:18.522 15:52:39 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:27:18.522 EAL: Detected CPU lcores: 10 00:27:18.522 EAL: Detected NUMA nodes: 1 00:27:18.522 EAL: Detected shared linkage of DPDK 00:27:18.522 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:27:18.522 EAL: Selected IOVA mode 'PA' 00:27:18.780 TELEMETRY: No legacy callbacks, legacy socket not created 00:27:18.780 00:27:18.780 00:27:18.780 CUnit - A unit testing framework for C - Version 2.1-3 00:27:18.780 http://cunit.sourceforge.net/ 00:27:18.780 00:27:18.780 00:27:18.780 Suite: memory 00:27:18.780 Test: test ... 00:27:18.780 register 0x200000200000 2097152 00:27:18.780 malloc 3145728 00:27:18.780 register 0x200000400000 4194304 00:27:18.780 buf 0x2000004fffc0 len 3145728 PASSED 00:27:18.780 malloc 64 00:27:18.780 buf 0x2000004ffec0 len 64 PASSED 00:27:18.780 malloc 4194304 00:27:18.780 register 0x200000800000 6291456 00:27:18.780 buf 0x2000009fffc0 len 4194304 PASSED 00:27:18.780 free 0x2000004fffc0 3145728 00:27:18.780 free 0x2000004ffec0 64 00:27:18.780 unregister 0x200000400000 4194304 PASSED 00:27:18.780 free 0x2000009fffc0 4194304 00:27:18.780 unregister 0x200000800000 6291456 PASSED 00:27:18.780 malloc 8388608 00:27:18.780 register 0x200000400000 10485760 00:27:18.780 buf 0x2000005fffc0 len 8388608 PASSED 00:27:18.780 free 0x2000005fffc0 8388608 00:27:18.780 unregister 0x200000400000 10485760 PASSED 00:27:18.780 passed 00:27:18.780 00:27:18.780 Run Summary: Type Total Ran Passed Failed Inactive 00:27:18.780 suites 1 1 n/a 0 0 00:27:18.780 tests 1 1 1 0 0 00:27:18.780 asserts 15 15 15 0 n/a 00:27:18.780 00:27:18.780 Elapsed time = 0.048 seconds 00:27:18.780 00:27:18.780 real 0m0.236s 00:27:18.780 user 0m0.073s 00:27:18.780 sys 0m0.061s 00:27:18.780 15:52:40 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:18.780 ************************************ 00:27:18.780 END TEST env_mem_callbacks 00:27:18.780 ************************************ 00:27:18.780 15:52:40 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:27:18.780 ************************************ 00:27:18.780 END TEST env 00:27:18.780 ************************************ 00:27:18.780 00:27:18.780 real 0m6.441s 00:27:18.780 user 0m5.082s 00:27:18.780 sys 0m1.004s 00:27:18.780 15:52:40 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:18.780 15:52:40 env -- 
common/autotest_common.sh@10 -- # set +x 00:27:18.780 15:52:40 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:27:18.780 15:52:40 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:27:18.780 15:52:40 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:18.780 15:52:40 -- common/autotest_common.sh@10 -- # set +x 00:27:18.780 ************************************ 00:27:18.780 START TEST rpc 00:27:18.780 ************************************ 00:27:18.780 15:52:40 rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:27:19.039 * Looking for test storage... 00:27:19.039 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:27:19.039 15:52:40 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:19.039 15:52:40 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:19.039 15:52:40 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:27:19.039 15:52:40 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:19.039 15:52:40 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:19.039 15:52:40 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:19.039 15:52:40 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:19.039 15:52:40 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:27:19.039 15:52:40 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:27:19.039 15:52:40 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:27:19.039 15:52:40 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:27:19.039 15:52:40 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:27:19.039 15:52:40 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:27:19.039 15:52:40 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:27:19.039 15:52:40 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:19.039 15:52:40 rpc -- scripts/common.sh@344 -- # case "$op" in 00:27:19.039 15:52:40 rpc -- scripts/common.sh@345 -- # : 1 00:27:19.039 15:52:40 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:19.039 15:52:40 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:19.039 15:52:40 rpc -- scripts/common.sh@365 -- # decimal 1 00:27:19.039 15:52:40 rpc -- scripts/common.sh@353 -- # local d=1 00:27:19.039 15:52:40 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:19.039 15:52:40 rpc -- scripts/common.sh@355 -- # echo 1 00:27:19.039 15:52:40 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:27:19.039 15:52:40 rpc -- scripts/common.sh@366 -- # decimal 2 00:27:19.039 15:52:40 rpc -- scripts/common.sh@353 -- # local d=2 00:27:19.039 15:52:40 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:19.039 15:52:40 rpc -- scripts/common.sh@355 -- # echo 2 00:27:19.039 15:52:40 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:27:19.039 15:52:40 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:19.039 15:52:40 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:19.039 15:52:40 rpc -- scripts/common.sh@368 -- # return 0 00:27:19.039 15:52:40 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:19.039 15:52:40 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:19.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:19.039 --rc genhtml_branch_coverage=1 00:27:19.039 --rc genhtml_function_coverage=1 00:27:19.039 --rc genhtml_legend=1 00:27:19.039 --rc geninfo_all_blocks=1 00:27:19.039 --rc geninfo_unexecuted_blocks=1 00:27:19.039 00:27:19.039 ' 00:27:19.039 15:52:40 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:19.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:19.039 --rc genhtml_branch_coverage=1 00:27:19.039 --rc genhtml_function_coverage=1 00:27:19.039 --rc genhtml_legend=1 00:27:19.039 --rc geninfo_all_blocks=1 00:27:19.039 --rc geninfo_unexecuted_blocks=1 00:27:19.039 00:27:19.039 ' 00:27:19.039 15:52:40 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:19.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:19.039 --rc genhtml_branch_coverage=1 00:27:19.039 --rc genhtml_function_coverage=1 00:27:19.039 --rc genhtml_legend=1 00:27:19.039 --rc geninfo_all_blocks=1 00:27:19.039 --rc geninfo_unexecuted_blocks=1 00:27:19.039 00:27:19.039 ' 00:27:19.039 15:52:40 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:19.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:19.039 --rc genhtml_branch_coverage=1 00:27:19.039 --rc genhtml_function_coverage=1 00:27:19.039 --rc genhtml_legend=1 00:27:19.039 --rc geninfo_all_blocks=1 00:27:19.039 --rc geninfo_unexecuted_blocks=1 00:27:19.039 00:27:19.039 ' 00:27:19.039 15:52:40 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57119 00:27:19.039 15:52:40 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:27:19.039 15:52:40 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57119 00:27:19.039 15:52:40 rpc -- common/autotest_common.sh@833 -- # '[' -z 57119 ']' 00:27:19.039 15:52:40 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:19.039 15:52:40 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:19.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:19.039 15:52:40 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
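(rpc.sh talks to spdk_tgt over the /var/tmp/spdk.sock JSON-RPC socket; calls such as bdev_malloc_create land in handlers registered inside the target. A minimal sketch of how such a method is exposed; the method name and handler here are hypothetical and not part of this test:)

#include "spdk/rpc.h"
#include "spdk/jsonrpc.h"

/* Hypothetical method: "demo_ping" replies with a bare boolean. */
static void
rpc_demo_ping(struct spdk_jsonrpc_request *request,
	      const struct spdk_json_val *params)
{
	if (params != NULL) {
		spdk_jsonrpc_send_error_response(request,
						 SPDK_JSONRPC_ERROR_INVALID_PARAMS,
						 "demo_ping takes no parameters");
		return;
	}
	spdk_jsonrpc_send_bool_response(request, true);
}
SPDK_RPC_REGISTER("demo_ping", rpc_demo_ping, SPDK_RPC_RUNTIME)

(Once built into the target, a JSON-RPC client such as scripts/rpc.py could invoke demo_ping, completing the same request/response round trip that rpc_cmd performs for each call below.)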
00:27:19.039 15:52:40 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:19.039 15:52:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:27:19.039 15:52:40 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:27:19.039 [2024-11-05 15:52:40.331490] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 00:27:19.039 [2024-11-05 15:52:40.331617] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57119 ] 00:27:19.297 [2024-11-05 15:52:40.492990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:19.297 [2024-11-05 15:52:40.593562] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:27:19.297 [2024-11-05 15:52:40.593622] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57119' to capture a snapshot of events at runtime. 00:27:19.297 [2024-11-05 15:52:40.593632] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:19.297 [2024-11-05 15:52:40.593641] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:19.297 [2024-11-05 15:52:40.593649] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57119 for offline analysis/debug. 00:27:19.297 [2024-11-05 15:52:40.594508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:19.864 15:52:41 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:19.864 15:52:41 rpc -- common/autotest_common.sh@866 -- # return 0 00:27:19.864 15:52:41 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:27:19.864 15:52:41 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:27:19.864 15:52:41 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:27:19.864 15:52:41 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:27:19.864 15:52:41 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:27:19.864 15:52:41 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:19.864 15:52:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:27:19.864 ************************************ 00:27:19.864 START TEST rpc_integrity 00:27:19.864 ************************************ 00:27:19.864 15:52:41 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:27:19.864 15:52:41 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:27:19.864 15:52:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.864 15:52:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:27:19.864 15:52:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.864 15:52:41 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:27:19.864 15:52:41 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:27:20.122 15:52:41 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:27:20.122 15:52:41 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 
00:27:20.122 15:52:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.122 15:52:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:27:20.122 15:52:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.122 15:52:41 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:27:20.122 15:52:41 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:27:20.122 15:52:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.122 15:52:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:27:20.122 15:52:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.122 15:52:41 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:27:20.122 { 00:27:20.122 "name": "Malloc0", 00:27:20.122 "aliases": [ 00:27:20.122 "25327f1d-62e4-4077-9049-7931180ac2cb" 00:27:20.122 ], 00:27:20.122 "product_name": "Malloc disk", 00:27:20.122 "block_size": 512, 00:27:20.122 "num_blocks": 16384, 00:27:20.122 "uuid": "25327f1d-62e4-4077-9049-7931180ac2cb", 00:27:20.122 "assigned_rate_limits": { 00:27:20.122 "rw_ios_per_sec": 0, 00:27:20.122 "rw_mbytes_per_sec": 0, 00:27:20.122 "r_mbytes_per_sec": 0, 00:27:20.122 "w_mbytes_per_sec": 0 00:27:20.122 }, 00:27:20.122 "claimed": false, 00:27:20.122 "zoned": false, 00:27:20.122 "supported_io_types": { 00:27:20.122 "read": true, 00:27:20.122 "write": true, 00:27:20.122 "unmap": true, 00:27:20.122 "flush": true, 00:27:20.122 "reset": true, 00:27:20.122 "nvme_admin": false, 00:27:20.122 "nvme_io": false, 00:27:20.122 "nvme_io_md": false, 00:27:20.122 "write_zeroes": true, 00:27:20.122 "zcopy": true, 00:27:20.122 "get_zone_info": false, 00:27:20.122 "zone_management": false, 00:27:20.122 "zone_append": false, 00:27:20.122 "compare": false, 00:27:20.122 "compare_and_write": false, 00:27:20.122 "abort": true, 00:27:20.122 "seek_hole": false, 00:27:20.122 "seek_data": false, 00:27:20.122 "copy": true, 00:27:20.122 "nvme_iov_md": false 00:27:20.122 }, 00:27:20.122 "memory_domains": [ 00:27:20.122 { 00:27:20.122 "dma_device_id": "system", 00:27:20.122 "dma_device_type": 1 00:27:20.122 }, 00:27:20.122 { 00:27:20.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:20.122 "dma_device_type": 2 00:27:20.122 } 00:27:20.122 ], 00:27:20.122 "driver_specific": {} 00:27:20.122 } 00:27:20.122 ]' 00:27:20.122 15:52:41 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:27:20.122 15:52:41 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:27:20.122 15:52:41 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:27:20.122 15:52:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.122 15:52:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:27:20.122 [2024-11-05 15:52:41.308369] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:27:20.122 [2024-11-05 15:52:41.308436] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:20.122 [2024-11-05 15:52:41.308467] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:27:20.122 [2024-11-05 15:52:41.308479] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:20.122 [2024-11-05 15:52:41.310681] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:20.123 [2024-11-05 15:52:41.310723] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:27:20.123 
Passthru0 00:27:20.123 15:52:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.123 15:52:41 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:27:20.123 15:52:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.123 15:52:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:27:20.123 15:52:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.123 15:52:41 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:27:20.123 { 00:27:20.123 "name": "Malloc0", 00:27:20.123 "aliases": [ 00:27:20.123 "25327f1d-62e4-4077-9049-7931180ac2cb" 00:27:20.123 ], 00:27:20.123 "product_name": "Malloc disk", 00:27:20.123 "block_size": 512, 00:27:20.123 "num_blocks": 16384, 00:27:20.123 "uuid": "25327f1d-62e4-4077-9049-7931180ac2cb", 00:27:20.123 "assigned_rate_limits": { 00:27:20.123 "rw_ios_per_sec": 0, 00:27:20.123 "rw_mbytes_per_sec": 0, 00:27:20.123 "r_mbytes_per_sec": 0, 00:27:20.123 "w_mbytes_per_sec": 0 00:27:20.123 }, 00:27:20.123 "claimed": true, 00:27:20.123 "claim_type": "exclusive_write", 00:27:20.123 "zoned": false, 00:27:20.123 "supported_io_types": { 00:27:20.123 "read": true, 00:27:20.123 "write": true, 00:27:20.123 "unmap": true, 00:27:20.123 "flush": true, 00:27:20.123 "reset": true, 00:27:20.123 "nvme_admin": false, 00:27:20.123 "nvme_io": false, 00:27:20.123 "nvme_io_md": false, 00:27:20.123 "write_zeroes": true, 00:27:20.123 "zcopy": true, 00:27:20.123 "get_zone_info": false, 00:27:20.123 "zone_management": false, 00:27:20.123 "zone_append": false, 00:27:20.123 "compare": false, 00:27:20.123 "compare_and_write": false, 00:27:20.123 "abort": true, 00:27:20.123 "seek_hole": false, 00:27:20.123 "seek_data": false, 00:27:20.123 "copy": true, 00:27:20.123 "nvme_iov_md": false 00:27:20.123 }, 00:27:20.123 "memory_domains": [ 00:27:20.123 { 00:27:20.123 "dma_device_id": "system", 00:27:20.123 "dma_device_type": 1 00:27:20.123 }, 00:27:20.123 { 00:27:20.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:20.123 "dma_device_type": 2 00:27:20.123 } 00:27:20.123 ], 00:27:20.123 "driver_specific": {} 00:27:20.123 }, 00:27:20.123 { 00:27:20.123 "name": "Passthru0", 00:27:20.123 "aliases": [ 00:27:20.123 "e496a53d-1db1-5f1f-b36c-71e1248ba259" 00:27:20.123 ], 00:27:20.123 "product_name": "passthru", 00:27:20.123 "block_size": 512, 00:27:20.123 "num_blocks": 16384, 00:27:20.123 "uuid": "e496a53d-1db1-5f1f-b36c-71e1248ba259", 00:27:20.123 "assigned_rate_limits": { 00:27:20.123 "rw_ios_per_sec": 0, 00:27:20.123 "rw_mbytes_per_sec": 0, 00:27:20.123 "r_mbytes_per_sec": 0, 00:27:20.123 "w_mbytes_per_sec": 0 00:27:20.123 }, 00:27:20.123 "claimed": false, 00:27:20.123 "zoned": false, 00:27:20.123 "supported_io_types": { 00:27:20.123 "read": true, 00:27:20.123 "write": true, 00:27:20.123 "unmap": true, 00:27:20.123 "flush": true, 00:27:20.123 "reset": true, 00:27:20.123 "nvme_admin": false, 00:27:20.123 "nvme_io": false, 00:27:20.123 "nvme_io_md": false, 00:27:20.123 "write_zeroes": true, 00:27:20.123 "zcopy": true, 00:27:20.123 "get_zone_info": false, 00:27:20.123 "zone_management": false, 00:27:20.123 "zone_append": false, 00:27:20.123 "compare": false, 00:27:20.123 "compare_and_write": false, 00:27:20.123 "abort": true, 00:27:20.123 "seek_hole": false, 00:27:20.123 "seek_data": false, 00:27:20.123 "copy": true, 00:27:20.123 "nvme_iov_md": false 00:27:20.123 }, 00:27:20.123 "memory_domains": [ 00:27:20.123 { 00:27:20.123 "dma_device_id": "system", 00:27:20.123 "dma_device_type": 1 00:27:20.123 }, 
00:27:20.123 { 00:27:20.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:20.123 "dma_device_type": 2 00:27:20.123 } 00:27:20.123 ], 00:27:20.123 "driver_specific": { 00:27:20.123 "passthru": { 00:27:20.123 "name": "Passthru0", 00:27:20.123 "base_bdev_name": "Malloc0" 00:27:20.123 } 00:27:20.123 } 00:27:20.123 } 00:27:20.123 ]' 00:27:20.123 15:52:41 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:27:20.123 15:52:41 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:27:20.123 15:52:41 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:27:20.123 15:52:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.123 15:52:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:27:20.123 15:52:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.123 15:52:41 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:27:20.123 15:52:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.123 15:52:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:27:20.123 15:52:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.123 15:52:41 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:27:20.123 15:52:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.123 15:52:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:27:20.123 15:52:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.123 15:52:41 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:27:20.123 15:52:41 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:27:20.123 15:52:41 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:27:20.123 00:27:20.123 real 0m0.233s 00:27:20.123 user 0m0.130s 00:27:20.123 sys 0m0.027s 00:27:20.123 15:52:41 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:20.123 15:52:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:27:20.123 ************************************ 00:27:20.123 END TEST rpc_integrity 00:27:20.123 ************************************ 00:27:20.123 15:52:41 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:27:20.123 15:52:41 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:27:20.123 15:52:41 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:20.123 15:52:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:27:20.123 ************************************ 00:27:20.123 START TEST rpc_plugins 00:27:20.123 ************************************ 00:27:20.123 15:52:41 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:27:20.123 15:52:41 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:27:20.123 15:52:41 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.123 15:52:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:27:20.381 15:52:41 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.381 15:52:41 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:27:20.381 15:52:41 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:27:20.381 15:52:41 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.381 15:52:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:27:20.381 15:52:41 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.381 15:52:41 
rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:27:20.381 { 00:27:20.381 "name": "Malloc1", 00:27:20.381 "aliases": [ 00:27:20.381 "97cbacc1-f912-42d9-9839-ffbb4059ca88" 00:27:20.381 ], 00:27:20.381 "product_name": "Malloc disk", 00:27:20.381 "block_size": 4096, 00:27:20.381 "num_blocks": 256, 00:27:20.381 "uuid": "97cbacc1-f912-42d9-9839-ffbb4059ca88", 00:27:20.381 "assigned_rate_limits": { 00:27:20.381 "rw_ios_per_sec": 0, 00:27:20.381 "rw_mbytes_per_sec": 0, 00:27:20.381 "r_mbytes_per_sec": 0, 00:27:20.381 "w_mbytes_per_sec": 0 00:27:20.381 }, 00:27:20.381 "claimed": false, 00:27:20.381 "zoned": false, 00:27:20.381 "supported_io_types": { 00:27:20.381 "read": true, 00:27:20.381 "write": true, 00:27:20.381 "unmap": true, 00:27:20.381 "flush": true, 00:27:20.381 "reset": true, 00:27:20.381 "nvme_admin": false, 00:27:20.381 "nvme_io": false, 00:27:20.381 "nvme_io_md": false, 00:27:20.381 "write_zeroes": true, 00:27:20.381 "zcopy": true, 00:27:20.381 "get_zone_info": false, 00:27:20.381 "zone_management": false, 00:27:20.381 "zone_append": false, 00:27:20.381 "compare": false, 00:27:20.381 "compare_and_write": false, 00:27:20.381 "abort": true, 00:27:20.381 "seek_hole": false, 00:27:20.381 "seek_data": false, 00:27:20.381 "copy": true, 00:27:20.381 "nvme_iov_md": false 00:27:20.381 }, 00:27:20.381 "memory_domains": [ 00:27:20.381 { 00:27:20.381 "dma_device_id": "system", 00:27:20.381 "dma_device_type": 1 00:27:20.381 }, 00:27:20.381 { 00:27:20.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:20.381 "dma_device_type": 2 00:27:20.381 } 00:27:20.381 ], 00:27:20.381 "driver_specific": {} 00:27:20.381 } 00:27:20.381 ]' 00:27:20.381 15:52:41 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:27:20.381 15:52:41 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:27:20.381 15:52:41 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:27:20.381 15:52:41 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.381 15:52:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:27:20.381 15:52:41 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.381 15:52:41 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:27:20.381 15:52:41 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.381 15:52:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:27:20.381 15:52:41 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.381 15:52:41 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:27:20.381 15:52:41 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:27:20.381 ************************************ 00:27:20.381 END TEST rpc_plugins 00:27:20.381 ************************************ 00:27:20.381 15:52:41 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:27:20.381 00:27:20.381 real 0m0.108s 00:27:20.381 user 0m0.062s 00:27:20.381 sys 0m0.013s 00:27:20.381 15:52:41 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:20.381 15:52:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:27:20.381 15:52:41 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:27:20.381 15:52:41 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:27:20.381 15:52:41 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:20.381 15:52:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:27:20.381 ************************************ 00:27:20.381 START TEST rpc_trace_cmd_test 
00:27:20.381 ************************************ 00:27:20.381 15:52:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:27:20.381 15:52:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:27:20.381 15:52:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:27:20.381 15:52:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.381 15:52:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:27:20.381 15:52:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.381 15:52:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:27:20.381 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57119", 00:27:20.381 "tpoint_group_mask": "0x8", 00:27:20.381 "iscsi_conn": { 00:27:20.382 "mask": "0x2", 00:27:20.382 "tpoint_mask": "0x0" 00:27:20.382 }, 00:27:20.382 "scsi": { 00:27:20.382 "mask": "0x4", 00:27:20.382 "tpoint_mask": "0x0" 00:27:20.382 }, 00:27:20.382 "bdev": { 00:27:20.382 "mask": "0x8", 00:27:20.382 "tpoint_mask": "0xffffffffffffffff" 00:27:20.382 }, 00:27:20.382 "nvmf_rdma": { 00:27:20.382 "mask": "0x10", 00:27:20.382 "tpoint_mask": "0x0" 00:27:20.382 }, 00:27:20.382 "nvmf_tcp": { 00:27:20.382 "mask": "0x20", 00:27:20.382 "tpoint_mask": "0x0" 00:27:20.382 }, 00:27:20.382 "ftl": { 00:27:20.382 "mask": "0x40", 00:27:20.382 "tpoint_mask": "0x0" 00:27:20.382 }, 00:27:20.382 "blobfs": { 00:27:20.382 "mask": "0x80", 00:27:20.382 "tpoint_mask": "0x0" 00:27:20.382 }, 00:27:20.382 "dsa": { 00:27:20.382 "mask": "0x200", 00:27:20.382 "tpoint_mask": "0x0" 00:27:20.382 }, 00:27:20.382 "thread": { 00:27:20.382 "mask": "0x400", 00:27:20.382 "tpoint_mask": "0x0" 00:27:20.382 }, 00:27:20.382 "nvme_pcie": { 00:27:20.382 "mask": "0x800", 00:27:20.382 "tpoint_mask": "0x0" 00:27:20.382 }, 00:27:20.382 "iaa": { 00:27:20.382 "mask": "0x1000", 00:27:20.382 "tpoint_mask": "0x0" 00:27:20.382 }, 00:27:20.382 "nvme_tcp": { 00:27:20.382 "mask": "0x2000", 00:27:20.382 "tpoint_mask": "0x0" 00:27:20.382 }, 00:27:20.382 "bdev_nvme": { 00:27:20.382 "mask": "0x4000", 00:27:20.382 "tpoint_mask": "0x0" 00:27:20.382 }, 00:27:20.382 "sock": { 00:27:20.382 "mask": "0x8000", 00:27:20.382 "tpoint_mask": "0x0" 00:27:20.382 }, 00:27:20.382 "blob": { 00:27:20.382 "mask": "0x10000", 00:27:20.382 "tpoint_mask": "0x0" 00:27:20.382 }, 00:27:20.382 "bdev_raid": { 00:27:20.382 "mask": "0x20000", 00:27:20.382 "tpoint_mask": "0x0" 00:27:20.382 }, 00:27:20.382 "scheduler": { 00:27:20.382 "mask": "0x40000", 00:27:20.382 "tpoint_mask": "0x0" 00:27:20.382 } 00:27:20.382 }' 00:27:20.382 15:52:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:27:20.382 15:52:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:27:20.382 15:52:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:27:20.382 15:52:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:27:20.382 15:52:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:27:20.382 15:52:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:27:20.382 15:52:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:27:20.641 15:52:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:27:20.641 15:52:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:27:20.641 ************************************ 00:27:20.641 END TEST rpc_trace_cmd_test 00:27:20.641 ************************************ 00:27:20.641 
15:52:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:27:20.641 00:27:20.641 real 0m0.161s 00:27:20.641 user 0m0.131s 00:27:20.641 sys 0m0.020s 00:27:20.641 15:52:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:20.641 15:52:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:27:20.641 15:52:41 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:27:20.641 15:52:41 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:27:20.641 15:52:41 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:27:20.641 15:52:41 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:27:20.641 15:52:41 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:20.641 15:52:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:27:20.641 ************************************ 00:27:20.641 START TEST rpc_daemon_integrity 00:27:20.641 ************************************ 00:27:20.641 15:52:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:27:20.641 15:52:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:27:20.641 15:52:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.641 15:52:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:27:20.641 15:52:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.641 15:52:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:27:20.641 15:52:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:27:20.641 15:52:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:27:20.641 15:52:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:27:20.641 15:52:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.641 15:52:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:27:20.641 15:52:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.641 15:52:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:27:20.641 15:52:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:27:20.641 15:52:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.641 15:52:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:27:20.641 15:52:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.641 15:52:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:27:20.641 { 00:27:20.641 "name": "Malloc2", 00:27:20.641 "aliases": [ 00:27:20.641 "68502c76-79ef-4bbd-aab4-7a8b49caec17" 00:27:20.641 ], 00:27:20.641 "product_name": "Malloc disk", 00:27:20.641 "block_size": 512, 00:27:20.641 "num_blocks": 16384, 00:27:20.641 "uuid": "68502c76-79ef-4bbd-aab4-7a8b49caec17", 00:27:20.641 "assigned_rate_limits": { 00:27:20.641 "rw_ios_per_sec": 0, 00:27:20.641 "rw_mbytes_per_sec": 0, 00:27:20.641 "r_mbytes_per_sec": 0, 00:27:20.641 "w_mbytes_per_sec": 0 00:27:20.641 }, 00:27:20.641 "claimed": false, 00:27:20.641 "zoned": false, 00:27:20.641 "supported_io_types": { 00:27:20.641 "read": true, 00:27:20.641 "write": true, 00:27:20.641 "unmap": true, 00:27:20.641 "flush": true, 00:27:20.641 "reset": true, 00:27:20.641 "nvme_admin": false, 00:27:20.641 "nvme_io": false, 00:27:20.641 "nvme_io_md": false, 00:27:20.641 "write_zeroes": true, 00:27:20.641 "zcopy": true, 00:27:20.641 
"get_zone_info": false, 00:27:20.641 "zone_management": false, 00:27:20.641 "zone_append": false, 00:27:20.641 "compare": false, 00:27:20.641 "compare_and_write": false, 00:27:20.641 "abort": true, 00:27:20.641 "seek_hole": false, 00:27:20.641 "seek_data": false, 00:27:20.641 "copy": true, 00:27:20.641 "nvme_iov_md": false 00:27:20.641 }, 00:27:20.641 "memory_domains": [ 00:27:20.641 { 00:27:20.641 "dma_device_id": "system", 00:27:20.641 "dma_device_type": 1 00:27:20.641 }, 00:27:20.641 { 00:27:20.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:20.641 "dma_device_type": 2 00:27:20.641 } 00:27:20.641 ], 00:27:20.641 "driver_specific": {} 00:27:20.641 } 00:27:20.641 ]' 00:27:20.641 15:52:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:27:20.641 15:52:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:27:20.641 15:52:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:27:20.641 15:52:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.641 15:52:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:27:20.641 [2024-11-05 15:52:41.931373] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:27:20.641 [2024-11-05 15:52:41.931436] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:20.641 [2024-11-05 15:52:41.931457] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:27:20.641 [2024-11-05 15:52:41.931468] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:20.641 [2024-11-05 15:52:41.934072] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:20.641 [2024-11-05 15:52:41.934116] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:27:20.641 Passthru0 00:27:20.641 15:52:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.641 15:52:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:27:20.641 15:52:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.641 15:52:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:27:20.641 15:52:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.641 15:52:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:27:20.641 { 00:27:20.641 "name": "Malloc2", 00:27:20.641 "aliases": [ 00:27:20.641 "68502c76-79ef-4bbd-aab4-7a8b49caec17" 00:27:20.641 ], 00:27:20.641 "product_name": "Malloc disk", 00:27:20.641 "block_size": 512, 00:27:20.641 "num_blocks": 16384, 00:27:20.641 "uuid": "68502c76-79ef-4bbd-aab4-7a8b49caec17", 00:27:20.641 "assigned_rate_limits": { 00:27:20.641 "rw_ios_per_sec": 0, 00:27:20.641 "rw_mbytes_per_sec": 0, 00:27:20.641 "r_mbytes_per_sec": 0, 00:27:20.641 "w_mbytes_per_sec": 0 00:27:20.641 }, 00:27:20.641 "claimed": true, 00:27:20.641 "claim_type": "exclusive_write", 00:27:20.641 "zoned": false, 00:27:20.641 "supported_io_types": { 00:27:20.641 "read": true, 00:27:20.641 "write": true, 00:27:20.642 "unmap": true, 00:27:20.642 "flush": true, 00:27:20.642 "reset": true, 00:27:20.642 "nvme_admin": false, 00:27:20.642 "nvme_io": false, 00:27:20.642 "nvme_io_md": false, 00:27:20.642 "write_zeroes": true, 00:27:20.642 "zcopy": true, 00:27:20.642 "get_zone_info": false, 00:27:20.642 "zone_management": false, 00:27:20.642 "zone_append": false, 00:27:20.642 "compare": 
false, 00:27:20.642 "compare_and_write": false, 00:27:20.642 "abort": true, 00:27:20.642 "seek_hole": false, 00:27:20.642 "seek_data": false, 00:27:20.642 "copy": true, 00:27:20.642 "nvme_iov_md": false 00:27:20.642 }, 00:27:20.642 "memory_domains": [ 00:27:20.642 { 00:27:20.642 "dma_device_id": "system", 00:27:20.642 "dma_device_type": 1 00:27:20.642 }, 00:27:20.642 { 00:27:20.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:20.642 "dma_device_type": 2 00:27:20.642 } 00:27:20.642 ], 00:27:20.642 "driver_specific": {} 00:27:20.642 }, 00:27:20.642 { 00:27:20.642 "name": "Passthru0", 00:27:20.642 "aliases": [ 00:27:20.642 "5437bb9b-00c6-5d66-a8f4-c8e850919512" 00:27:20.642 ], 00:27:20.642 "product_name": "passthru", 00:27:20.642 "block_size": 512, 00:27:20.642 "num_blocks": 16384, 00:27:20.642 "uuid": "5437bb9b-00c6-5d66-a8f4-c8e850919512", 00:27:20.642 "assigned_rate_limits": { 00:27:20.642 "rw_ios_per_sec": 0, 00:27:20.642 "rw_mbytes_per_sec": 0, 00:27:20.642 "r_mbytes_per_sec": 0, 00:27:20.642 "w_mbytes_per_sec": 0 00:27:20.642 }, 00:27:20.642 "claimed": false, 00:27:20.642 "zoned": false, 00:27:20.642 "supported_io_types": { 00:27:20.642 "read": true, 00:27:20.642 "write": true, 00:27:20.642 "unmap": true, 00:27:20.642 "flush": true, 00:27:20.642 "reset": true, 00:27:20.642 "nvme_admin": false, 00:27:20.642 "nvme_io": false, 00:27:20.642 "nvme_io_md": false, 00:27:20.642 "write_zeroes": true, 00:27:20.642 "zcopy": true, 00:27:20.642 "get_zone_info": false, 00:27:20.642 "zone_management": false, 00:27:20.642 "zone_append": false, 00:27:20.642 "compare": false, 00:27:20.642 "compare_and_write": false, 00:27:20.642 "abort": true, 00:27:20.642 "seek_hole": false, 00:27:20.642 "seek_data": false, 00:27:20.642 "copy": true, 00:27:20.642 "nvme_iov_md": false 00:27:20.642 }, 00:27:20.642 "memory_domains": [ 00:27:20.642 { 00:27:20.642 "dma_device_id": "system", 00:27:20.642 "dma_device_type": 1 00:27:20.642 }, 00:27:20.642 { 00:27:20.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:20.642 "dma_device_type": 2 00:27:20.642 } 00:27:20.642 ], 00:27:20.642 "driver_specific": { 00:27:20.642 "passthru": { 00:27:20.642 "name": "Passthru0", 00:27:20.642 "base_bdev_name": "Malloc2" 00:27:20.642 } 00:27:20.642 } 00:27:20.642 } 00:27:20.642 ]' 00:27:20.642 15:52:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:27:20.642 15:52:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:27:20.642 15:52:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:27:20.642 15:52:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.642 15:52:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:27:20.642 15:52:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.642 15:52:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:27:20.642 15:52:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.642 15:52:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:27:20.900 15:52:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.900 15:52:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:27:20.900 15:52:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.900 15:52:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:27:20.900 15:52:42 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.900 15:52:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:27:20.900 15:52:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:27:20.900 ************************************ 00:27:20.900 END TEST rpc_daemon_integrity 00:27:20.900 ************************************ 00:27:20.900 15:52:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:27:20.900 00:27:20.900 real 0m0.242s 00:27:20.900 user 0m0.128s 00:27:20.900 sys 0m0.030s 00:27:20.900 15:52:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:20.900 15:52:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:27:20.900 15:52:42 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:27:20.900 15:52:42 rpc -- rpc/rpc.sh@84 -- # killprocess 57119 00:27:20.900 15:52:42 rpc -- common/autotest_common.sh@952 -- # '[' -z 57119 ']' 00:27:20.900 15:52:42 rpc -- common/autotest_common.sh@956 -- # kill -0 57119 00:27:20.900 15:52:42 rpc -- common/autotest_common.sh@957 -- # uname 00:27:20.900 15:52:42 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:20.900 15:52:42 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57119 00:27:20.900 killing process with pid 57119 00:27:20.900 15:52:42 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:20.900 15:52:42 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:20.900 15:52:42 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57119' 00:27:20.900 15:52:42 rpc -- common/autotest_common.sh@971 -- # kill 57119 00:27:20.900 15:52:42 rpc -- common/autotest_common.sh@976 -- # wait 57119 00:27:22.806 ************************************ 00:27:22.806 END TEST rpc 00:27:22.806 ************************************ 00:27:22.806 00:27:22.806 real 0m3.572s 00:27:22.806 user 0m3.947s 00:27:22.806 sys 0m0.572s 00:27:22.806 15:52:43 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:22.806 15:52:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:27:22.806 15:52:43 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:27:22.806 15:52:43 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:27:22.806 15:52:43 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:22.806 15:52:43 -- common/autotest_common.sh@10 -- # set +x 00:27:22.806 ************************************ 00:27:22.806 START TEST skip_rpc 00:27:22.806 ************************************ 00:27:22.806 15:52:43 skip_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:27:22.806 * Looking for test storage... 
00:27:22.806 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:27:22.806 15:52:43 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:22.806 15:52:43 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:27:22.806 15:52:43 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:22.806 15:52:43 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:22.806 15:52:43 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:22.806 15:52:43 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:22.806 15:52:43 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:22.806 15:52:43 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:27:22.806 15:52:43 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:27:22.806 15:52:43 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:27:22.806 15:52:43 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:27:22.806 15:52:43 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:27:22.806 15:52:43 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:27:22.806 15:52:43 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:27:22.806 15:52:43 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:22.806 15:52:43 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:27:22.806 15:52:43 skip_rpc -- scripts/common.sh@345 -- # : 1 00:27:22.806 15:52:43 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:22.806 15:52:43 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:22.806 15:52:43 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:27:22.806 15:52:43 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:27:22.806 15:52:43 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:22.806 15:52:43 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:27:22.806 15:52:43 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:27:22.806 15:52:43 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:27:22.806 15:52:43 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:27:22.806 15:52:43 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:22.806 15:52:43 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:27:22.806 15:52:43 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:27:22.806 15:52:43 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:22.806 15:52:43 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:22.806 15:52:43 skip_rpc -- scripts/common.sh@368 -- # return 0 00:27:22.806 15:52:43 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:22.806 15:52:43 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:22.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:22.806 --rc genhtml_branch_coverage=1 00:27:22.806 --rc genhtml_function_coverage=1 00:27:22.806 --rc genhtml_legend=1 00:27:22.806 --rc geninfo_all_blocks=1 00:27:22.806 --rc geninfo_unexecuted_blocks=1 00:27:22.806 00:27:22.806 ' 00:27:22.806 15:52:43 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:22.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:22.806 --rc genhtml_branch_coverage=1 00:27:22.806 --rc genhtml_function_coverage=1 00:27:22.806 --rc genhtml_legend=1 00:27:22.806 --rc geninfo_all_blocks=1 00:27:22.806 --rc geninfo_unexecuted_blocks=1 00:27:22.806 00:27:22.806 ' 00:27:22.806 15:52:43 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:27:22.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:22.806 --rc genhtml_branch_coverage=1 00:27:22.806 --rc genhtml_function_coverage=1 00:27:22.806 --rc genhtml_legend=1 00:27:22.806 --rc geninfo_all_blocks=1 00:27:22.806 --rc geninfo_unexecuted_blocks=1 00:27:22.806 00:27:22.806 ' 00:27:22.806 15:52:43 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:22.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:22.806 --rc genhtml_branch_coverage=1 00:27:22.806 --rc genhtml_function_coverage=1 00:27:22.806 --rc genhtml_legend=1 00:27:22.806 --rc geninfo_all_blocks=1 00:27:22.806 --rc geninfo_unexecuted_blocks=1 00:27:22.806 00:27:22.806 ' 00:27:22.806 15:52:43 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:27:22.806 15:52:43 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:27:22.806 15:52:43 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:27:22.806 15:52:43 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:27:22.806 15:52:43 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:22.806 15:52:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:22.806 ************************************ 00:27:22.806 START TEST skip_rpc 00:27:22.806 ************************************ 00:27:22.806 15:52:43 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:27:22.806 15:52:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57332 00:27:22.806 15:52:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:27:22.807 15:52:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:27:22.807 15:52:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:27:22.807 [2024-11-05 15:52:43.942444] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 
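The skip_rpc case traced here (and continued below) starts spdk_tgt with --no-rpc-server, waits, then asserts that an RPC call fails because no server is listening. A condensed sketch of that flow, using the NOT and rpc_cmd helpers visible in the trace:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    spdk_pid=$!
    trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
    sleep 5
    NOT rpc_cmd spdk_get_version   # must fail (es=1): the RPC server was never started
    trap - SIGINT SIGTERM EXIT
    killprocess "$spdk_pid"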
00:27:22.807 [2024-11-05 15:52:43.942673] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57332 ] 00:27:22.807 [2024-11-05 15:52:44.094842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:23.091 [2024-11-05 15:52:44.196263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:28.393 15:52:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:27:28.393 15:52:48 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:27:28.393 15:52:48 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:27:28.393 15:52:48 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:28.393 15:52:48 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:28.393 15:52:48 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:28.393 15:52:48 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:28.393 15:52:48 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:27:28.393 15:52:48 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.393 15:52:48 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:28.393 15:52:48 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:28.393 15:52:48 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:27:28.393 15:52:48 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:28.393 15:52:48 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:28.393 15:52:48 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:28.393 15:52:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:27:28.393 15:52:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57332 00:27:28.393 15:52:48 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 57332 ']' 00:27:28.393 15:52:48 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 57332 00:27:28.393 15:52:48 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:27:28.393 15:52:48 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:28.393 15:52:48 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57332 00:27:28.393 15:52:48 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:28.393 15:52:48 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:28.393 15:52:48 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57332' 00:27:28.393 killing process with pid 57332 00:27:28.393 15:52:48 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 57332 00:27:28.393 15:52:48 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 57332 00:27:28.960 00:27:28.960 real 0m6.233s 00:27:28.960 user 0m5.867s 00:27:28.960 sys 0m0.255s 00:27:28.960 ************************************ 00:27:28.960 END TEST skip_rpc 00:27:28.960 ************************************ 00:27:28.960 15:52:50 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:28.960 15:52:50 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:27:28.960 15:52:50 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:27:28.960 15:52:50 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:27:28.960 15:52:50 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:28.960 15:52:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:28.960 ************************************ 00:27:28.960 START TEST skip_rpc_with_json 00:27:28.960 ************************************ 00:27:28.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:28.960 15:52:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:27:28.960 15:52:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:27:28.960 15:52:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57430 00:27:28.960 15:52:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:27:28.960 15:52:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57430 00:27:28.960 15:52:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 57430 ']' 00:27:28.960 15:52:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:28.960 15:52:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:28.960 15:52:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:28.960 15:52:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:28.960 15:52:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:27:28.960 15:52:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:27:28.960 [2024-11-05 15:52:50.209452] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 
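waitforlisten 57430 blocks until the freshly started target is serving RPCs on /var/tmp/spdk.sock. The trace only shows the pid guard and the "Waiting for process..." echo; the polling loop below is an assumption about its internals (the real helper in autotest_common.sh retries an actual RPC, approximated here with a plain socket check):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        [ -n "$pid" ] || return 1
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" || return 1     # target died during startup
            [ -S "$rpc_addr" ] && return 0 # hypothetical readiness check
            sleep 0.5
        done
        return 1
    }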
00:27:28.960 [2024-11-05 15:52:50.209555] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57430 ] 00:27:29.218 [2024-11-05 15:52:50.361844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:29.218 [2024-11-05 15:52:50.448951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:29.784 15:52:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:29.784 15:52:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:27:29.784 15:52:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:27:29.784 15:52:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.784 15:52:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:27:29.784 [2024-11-05 15:52:51.057645] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:27:29.784 request: 00:27:29.784 { 00:27:29.784 "trtype": "tcp", 00:27:29.784 "method": "nvmf_get_transports", 00:27:29.784 "req_id": 1 00:27:29.784 } 00:27:29.784 Got JSON-RPC error response 00:27:29.784 response: 00:27:29.784 { 00:27:29.784 "code": -19, 00:27:29.784 "message": "No such device" 00:27:29.784 } 00:27:29.784 15:52:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:29.784 15:52:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:27:29.784 15:52:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.784 15:52:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:27:29.784 [2024-11-05 15:52:51.065751] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:29.784 15:52:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.784 15:52:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:27:29.784 15:52:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.784 15:52:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:27:30.042 15:52:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.042 15:52:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:27:30.042 { 00:27:30.042 "subsystems": [ 00:27:30.042 { 00:27:30.042 "subsystem": "fsdev", 00:27:30.042 "config": [ 00:27:30.042 { 00:27:30.042 "method": "fsdev_set_opts", 00:27:30.042 "params": { 00:27:30.042 "fsdev_io_pool_size": 65535, 00:27:30.042 "fsdev_io_cache_size": 256 00:27:30.042 } 00:27:30.042 } 00:27:30.042 ] 00:27:30.042 }, 00:27:30.042 { 00:27:30.042 "subsystem": "keyring", 00:27:30.042 "config": [] 00:27:30.042 }, 00:27:30.042 { 00:27:30.042 "subsystem": "iobuf", 00:27:30.042 "config": [ 00:27:30.042 { 00:27:30.042 "method": "iobuf_set_options", 00:27:30.042 "params": { 00:27:30.042 "small_pool_count": 8192, 00:27:30.042 "large_pool_count": 1024, 00:27:30.042 "small_bufsize": 8192, 00:27:30.042 "large_bufsize": 135168, 00:27:30.042 "enable_numa": false 00:27:30.042 } 00:27:30.042 } 00:27:30.042 ] 00:27:30.042 }, 00:27:30.042 { 00:27:30.042 "subsystem": "sock", 00:27:30.042 "config": [ 00:27:30.042 { 
00:27:30.042 "method": "sock_set_default_impl", 00:27:30.042 "params": { 00:27:30.042 "impl_name": "posix" 00:27:30.042 } 00:27:30.042 }, 00:27:30.042 { 00:27:30.042 "method": "sock_impl_set_options", 00:27:30.042 "params": { 00:27:30.042 "impl_name": "ssl", 00:27:30.042 "recv_buf_size": 4096, 00:27:30.042 "send_buf_size": 4096, 00:27:30.042 "enable_recv_pipe": true, 00:27:30.042 "enable_quickack": false, 00:27:30.042 "enable_placement_id": 0, 00:27:30.042 "enable_zerocopy_send_server": true, 00:27:30.042 "enable_zerocopy_send_client": false, 00:27:30.042 "zerocopy_threshold": 0, 00:27:30.042 "tls_version": 0, 00:27:30.042 "enable_ktls": false 00:27:30.042 } 00:27:30.042 }, 00:27:30.042 { 00:27:30.042 "method": "sock_impl_set_options", 00:27:30.042 "params": { 00:27:30.042 "impl_name": "posix", 00:27:30.042 "recv_buf_size": 2097152, 00:27:30.042 "send_buf_size": 2097152, 00:27:30.042 "enable_recv_pipe": true, 00:27:30.042 "enable_quickack": false, 00:27:30.042 "enable_placement_id": 0, 00:27:30.042 "enable_zerocopy_send_server": true, 00:27:30.042 "enable_zerocopy_send_client": false, 00:27:30.042 "zerocopy_threshold": 0, 00:27:30.042 "tls_version": 0, 00:27:30.042 "enable_ktls": false 00:27:30.042 } 00:27:30.042 } 00:27:30.042 ] 00:27:30.042 }, 00:27:30.042 { 00:27:30.042 "subsystem": "vmd", 00:27:30.042 "config": [] 00:27:30.042 }, 00:27:30.042 { 00:27:30.042 "subsystem": "accel", 00:27:30.042 "config": [ 00:27:30.042 { 00:27:30.042 "method": "accel_set_options", 00:27:30.042 "params": { 00:27:30.042 "small_cache_size": 128, 00:27:30.042 "large_cache_size": 16, 00:27:30.042 "task_count": 2048, 00:27:30.042 "sequence_count": 2048, 00:27:30.042 "buf_count": 2048 00:27:30.042 } 00:27:30.042 } 00:27:30.042 ] 00:27:30.042 }, 00:27:30.042 { 00:27:30.042 "subsystem": "bdev", 00:27:30.042 "config": [ 00:27:30.042 { 00:27:30.042 "method": "bdev_set_options", 00:27:30.042 "params": { 00:27:30.043 "bdev_io_pool_size": 65535, 00:27:30.043 "bdev_io_cache_size": 256, 00:27:30.043 "bdev_auto_examine": true, 00:27:30.043 "iobuf_small_cache_size": 128, 00:27:30.043 "iobuf_large_cache_size": 16 00:27:30.043 } 00:27:30.043 }, 00:27:30.043 { 00:27:30.043 "method": "bdev_raid_set_options", 00:27:30.043 "params": { 00:27:30.043 "process_window_size_kb": 1024, 00:27:30.043 "process_max_bandwidth_mb_sec": 0 00:27:30.043 } 00:27:30.043 }, 00:27:30.043 { 00:27:30.043 "method": "bdev_iscsi_set_options", 00:27:30.043 "params": { 00:27:30.043 "timeout_sec": 30 00:27:30.043 } 00:27:30.043 }, 00:27:30.043 { 00:27:30.043 "method": "bdev_nvme_set_options", 00:27:30.043 "params": { 00:27:30.043 "action_on_timeout": "none", 00:27:30.043 "timeout_us": 0, 00:27:30.043 "timeout_admin_us": 0, 00:27:30.043 "keep_alive_timeout_ms": 10000, 00:27:30.043 "arbitration_burst": 0, 00:27:30.043 "low_priority_weight": 0, 00:27:30.043 "medium_priority_weight": 0, 00:27:30.043 "high_priority_weight": 0, 00:27:30.043 "nvme_adminq_poll_period_us": 10000, 00:27:30.043 "nvme_ioq_poll_period_us": 0, 00:27:30.043 "io_queue_requests": 0, 00:27:30.043 "delay_cmd_submit": true, 00:27:30.043 "transport_retry_count": 4, 00:27:30.043 "bdev_retry_count": 3, 00:27:30.043 "transport_ack_timeout": 0, 00:27:30.043 "ctrlr_loss_timeout_sec": 0, 00:27:30.043 "reconnect_delay_sec": 0, 00:27:30.043 "fast_io_fail_timeout_sec": 0, 00:27:30.043 "disable_auto_failback": false, 00:27:30.043 "generate_uuids": false, 00:27:30.043 "transport_tos": 0, 00:27:30.043 "nvme_error_stat": false, 00:27:30.043 "rdma_srq_size": 0, 00:27:30.043 "io_path_stat": false, 
00:27:30.043 "allow_accel_sequence": false, 00:27:30.043 "rdma_max_cq_size": 0, 00:27:30.043 "rdma_cm_event_timeout_ms": 0, 00:27:30.043 "dhchap_digests": [ 00:27:30.043 "sha256", 00:27:30.043 "sha384", 00:27:30.043 "sha512" 00:27:30.043 ], 00:27:30.043 "dhchap_dhgroups": [ 00:27:30.043 "null", 00:27:30.043 "ffdhe2048", 00:27:30.043 "ffdhe3072", 00:27:30.043 "ffdhe4096", 00:27:30.043 "ffdhe6144", 00:27:30.043 "ffdhe8192" 00:27:30.043 ] 00:27:30.043 } 00:27:30.043 }, 00:27:30.043 { 00:27:30.043 "method": "bdev_nvme_set_hotplug", 00:27:30.043 "params": { 00:27:30.043 "period_us": 100000, 00:27:30.043 "enable": false 00:27:30.043 } 00:27:30.043 }, 00:27:30.043 { 00:27:30.043 "method": "bdev_wait_for_examine" 00:27:30.043 } 00:27:30.043 ] 00:27:30.043 }, 00:27:30.043 { 00:27:30.043 "subsystem": "scsi", 00:27:30.043 "config": null 00:27:30.043 }, 00:27:30.043 { 00:27:30.043 "subsystem": "scheduler", 00:27:30.043 "config": [ 00:27:30.043 { 00:27:30.043 "method": "framework_set_scheduler", 00:27:30.043 "params": { 00:27:30.043 "name": "static" 00:27:30.043 } 00:27:30.043 } 00:27:30.043 ] 00:27:30.043 }, 00:27:30.043 { 00:27:30.043 "subsystem": "vhost_scsi", 00:27:30.043 "config": [] 00:27:30.043 }, 00:27:30.043 { 00:27:30.043 "subsystem": "vhost_blk", 00:27:30.043 "config": [] 00:27:30.043 }, 00:27:30.043 { 00:27:30.043 "subsystem": "ublk", 00:27:30.043 "config": [] 00:27:30.043 }, 00:27:30.043 { 00:27:30.043 "subsystem": "nbd", 00:27:30.043 "config": [] 00:27:30.043 }, 00:27:30.043 { 00:27:30.043 "subsystem": "nvmf", 00:27:30.043 "config": [ 00:27:30.043 { 00:27:30.043 "method": "nvmf_set_config", 00:27:30.043 "params": { 00:27:30.043 "discovery_filter": "match_any", 00:27:30.043 "admin_cmd_passthru": { 00:27:30.043 "identify_ctrlr": false 00:27:30.043 }, 00:27:30.043 "dhchap_digests": [ 00:27:30.043 "sha256", 00:27:30.043 "sha384", 00:27:30.043 "sha512" 00:27:30.043 ], 00:27:30.043 "dhchap_dhgroups": [ 00:27:30.043 "null", 00:27:30.043 "ffdhe2048", 00:27:30.043 "ffdhe3072", 00:27:30.043 "ffdhe4096", 00:27:30.043 "ffdhe6144", 00:27:30.043 "ffdhe8192" 00:27:30.043 ] 00:27:30.043 } 00:27:30.043 }, 00:27:30.043 { 00:27:30.043 "method": "nvmf_set_max_subsystems", 00:27:30.043 "params": { 00:27:30.043 "max_subsystems": 1024 00:27:30.043 } 00:27:30.043 }, 00:27:30.043 { 00:27:30.043 "method": "nvmf_set_crdt", 00:27:30.043 "params": { 00:27:30.043 "crdt1": 0, 00:27:30.043 "crdt2": 0, 00:27:30.043 "crdt3": 0 00:27:30.043 } 00:27:30.043 }, 00:27:30.043 { 00:27:30.043 "method": "nvmf_create_transport", 00:27:30.043 "params": { 00:27:30.043 "trtype": "TCP", 00:27:30.043 "max_queue_depth": 128, 00:27:30.043 "max_io_qpairs_per_ctrlr": 127, 00:27:30.043 "in_capsule_data_size": 4096, 00:27:30.043 "max_io_size": 131072, 00:27:30.043 "io_unit_size": 131072, 00:27:30.043 "max_aq_depth": 128, 00:27:30.043 "num_shared_buffers": 511, 00:27:30.043 "buf_cache_size": 4294967295, 00:27:30.043 "dif_insert_or_strip": false, 00:27:30.043 "zcopy": false, 00:27:30.043 "c2h_success": true, 00:27:30.043 "sock_priority": 0, 00:27:30.043 "abort_timeout_sec": 1, 00:27:30.043 "ack_timeout": 0, 00:27:30.043 "data_wr_pool_size": 0 00:27:30.043 } 00:27:30.043 } 00:27:30.043 ] 00:27:30.043 }, 00:27:30.043 { 00:27:30.043 "subsystem": "iscsi", 00:27:30.043 "config": [ 00:27:30.043 { 00:27:30.043 "method": "iscsi_set_options", 00:27:30.043 "params": { 00:27:30.043 "node_base": "iqn.2016-06.io.spdk", 00:27:30.043 "max_sessions": 128, 00:27:30.043 "max_connections_per_session": 2, 00:27:30.043 "max_queue_depth": 64, 00:27:30.043 
"default_time2wait": 2, 00:27:30.043 "default_time2retain": 20, 00:27:30.043 "first_burst_length": 8192, 00:27:30.043 "immediate_data": true, 00:27:30.043 "allow_duplicated_isid": false, 00:27:30.043 "error_recovery_level": 0, 00:27:30.043 "nop_timeout": 60, 00:27:30.043 "nop_in_interval": 30, 00:27:30.043 "disable_chap": false, 00:27:30.043 "require_chap": false, 00:27:30.043 "mutual_chap": false, 00:27:30.043 "chap_group": 0, 00:27:30.043 "max_large_datain_per_connection": 64, 00:27:30.043 "max_r2t_per_connection": 4, 00:27:30.043 "pdu_pool_size": 36864, 00:27:30.043 "immediate_data_pool_size": 16384, 00:27:30.043 "data_out_pool_size": 2048 00:27:30.043 } 00:27:30.043 } 00:27:30.043 ] 00:27:30.043 } 00:27:30.043 ] 00:27:30.043 } 00:27:30.043 15:52:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:27:30.043 15:52:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57430 00:27:30.043 15:52:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 57430 ']' 00:27:30.043 15:52:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 57430 00:27:30.043 15:52:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:27:30.043 15:52:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:30.043 15:52:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57430 00:27:30.043 killing process with pid 57430 00:27:30.043 15:52:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:30.043 15:52:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:30.043 15:52:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57430' 00:27:30.043 15:52:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 57430 00:27:30.043 15:52:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 57430 00:27:31.416 15:52:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57464 00:27:31.416 15:52:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:27:31.416 15:52:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:27:36.719 15:52:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57464 00:27:36.720 15:52:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 57464 ']' 00:27:36.720 15:52:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 57464 00:27:36.720 15:52:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:27:36.720 15:52:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:36.720 15:52:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57464 00:27:36.720 killing process with pid 57464 00:27:36.720 15:52:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:36.720 15:52:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:36.720 15:52:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57464' 00:27:36.720 15:52:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- 
# kill 57464 00:27:36.720 15:52:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 57464 00:27:37.651 15:52:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:27:37.651 15:52:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:27:37.651 ************************************ 00:27:37.651 END TEST skip_rpc_with_json 00:27:37.651 ************************************ 00:27:37.651 00:27:37.651 real 0m8.535s 00:27:37.651 user 0m8.191s 00:27:37.651 sys 0m0.546s 00:27:37.651 15:52:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:37.651 15:52:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:27:37.651 15:52:58 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:27:37.651 15:52:58 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:27:37.651 15:52:58 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:37.651 15:52:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:37.651 ************************************ 00:27:37.651 START TEST skip_rpc_with_delay 00:27:37.651 ************************************ 00:27:37.651 15:52:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:27:37.651 15:52:58 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:27:37.651 15:52:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:27:37.651 15:52:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:27:37.651 15:52:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:37.651 15:52:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:37.651 15:52:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:37.651 15:52:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:37.651 15:52:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:37.651 15:52:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:37.651 15:52:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:37.651 15:52:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:27:37.651 15:52:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:27:37.651 [2024-11-05 15:52:58.794863] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
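skip_rpc_with_delay asserts that contradictory flags are rejected up front: --wait-for-rpc requires an RPC server, which --no-rpc-server disables, so spdk_tgt must exit non-zero with the *ERROR* above before any delay logic runs. Reduced to its essence:

    if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo 'spdk_tgt accepted contradictory RPC flags' >&2
        exit 1
    fi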
00:27:37.651 15:52:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:27:37.651 15:52:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:37.651 15:52:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:37.651 15:52:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:37.651 00:27:37.651 real 0m0.131s 00:27:37.651 user 0m0.070s 00:27:37.651 sys 0m0.057s 00:27:37.651 15:52:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:37.651 15:52:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:27:37.651 ************************************ 00:27:37.651 END TEST skip_rpc_with_delay 00:27:37.651 ************************************ 00:27:37.651 15:52:58 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:27:37.651 15:52:58 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:27:37.651 15:52:58 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:27:37.651 15:52:58 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:27:37.651 15:52:58 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:37.651 15:52:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:37.651 ************************************ 00:27:37.651 START TEST exit_on_failed_rpc_init 00:27:37.651 ************************************ 00:27:37.651 15:52:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:27:37.651 15:52:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57587 00:27:37.651 15:52:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57587 00:27:37.651 15:52:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 57587 ']' 00:27:37.651 15:52:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:37.651 15:52:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:37.651 15:52:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:27:37.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:37.652 15:52:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:37.652 15:52:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:37.652 15:52:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:27:37.652 [2024-11-05 15:52:58.971559] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 
00:27:37.652 [2024-11-05 15:52:58.971667] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57587 ] 00:27:37.909 [2024-11-05 15:52:59.125070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:37.909 [2024-11-05 15:52:59.209313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:38.474 15:52:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:38.474 15:52:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:27:38.474 15:52:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:27:38.474 15:52:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:27:38.474 15:52:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:27:38.474 15:52:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:27:38.474 15:52:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:38.474 15:52:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:38.474 15:52:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:38.474 15:52:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:38.474 15:52:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:38.474 15:52:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:38.474 15:52:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:38.474 15:52:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:27:38.474 15:52:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:27:38.730 [2024-11-05 15:52:59.842823] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 00:27:38.730 [2024-11-05 15:52:59.842967] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57599 ] 00:27:38.730 [2024-11-05 15:53:00.007151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:38.988 [2024-11-05 15:53:00.107520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:38.988 [2024-11-05 15:53:00.107622] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
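exit_on_failed_rpc_init checks that a second target on the same RPC socket fails cleanly: the first spdk_tgt (-m 0x1, pid 57587) holds /var/tmp/spdk.sock, so the second instance (-m 0x2) hits the "in use" error above and stops with a non-zero status. Schematically, using the helpers from the trace:

    spdk_tgt -m 0x1 & first=$!
    waitforlisten "$first"    # first instance now owns /var/tmp/spdk.sock
    NOT spdk_tgt -m 0x2       # second instance must fail rpc_initialize and exit non-zero
    killprocess "$first"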
00:27:38.988 [2024-11-05 15:53:00.107640] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:27:38.988 [2024-11-05 15:53:00.107658] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:38.988 15:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:27:38.988 15:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:38.988 15:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:27:38.988 15:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:27:38.988 15:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:27:38.988 15:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:38.988 15:53:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:27:38.988 15:53:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57587 00:27:38.988 15:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 57587 ']' 00:27:38.988 15:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 57587 00:27:38.988 15:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:27:38.988 15:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:38.988 15:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57587 00:27:38.988 killing process with pid 57587 00:27:38.988 15:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:38.988 15:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:38.988 15:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57587' 00:27:38.988 15:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 57587 00:27:38.988 15:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 57587 00:27:40.360 ************************************ 00:27:40.360 END TEST exit_on_failed_rpc_init 00:27:40.360 ************************************ 00:27:40.360 00:27:40.360 real 0m2.653s 00:27:40.360 user 0m2.919s 00:27:40.360 sys 0m0.415s 00:27:40.360 15:53:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:40.360 15:53:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:27:40.360 15:53:01 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:27:40.360 ************************************ 00:27:40.360 END TEST skip_rpc 00:27:40.360 ************************************ 00:27:40.360 00:27:40.360 real 0m17.855s 00:27:40.360 user 0m17.178s 00:27:40.360 sys 0m1.433s 00:27:40.360 15:53:01 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:40.360 15:53:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:40.360 15:53:01 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:27:40.360 15:53:01 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:27:40.360 15:53:01 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:40.360 15:53:01 -- common/autotest_common.sh@10 -- # set +x 00:27:40.360 
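Every suite in this log is driven by the run_test wrapper, whose banners and real/user/sys timings frame the output. A sketch of its visible behavior, reconstructed from the banners and the "'[' 2 -le 1 ']'" argument guard; the real helper also toggles xtrace and uses its own timing machinery, which is assumed away here:

    run_test() {
        local test_name=$1; shift
        (( $# >= 1 )) || return 1    # needs at least a command to run
        echo '************************************'
        echo "START TEST $test_name"
        echo '************************************'
        time "$@"
        echo '************************************'
        echo "END TEST $test_name"
        echo '************************************'
    }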
************************************ 00:27:40.360 START TEST rpc_client 00:27:40.360 ************************************ 00:27:40.360 15:53:01 rpc_client -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:27:40.360 * Looking for test storage... 00:27:40.360 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:27:40.360 15:53:01 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:40.360 15:53:01 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:40.360 15:53:01 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:27:40.617 15:53:01 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:40.617 15:53:01 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:40.617 15:53:01 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:40.617 15:53:01 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:40.617 15:53:01 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:27:40.617 15:53:01 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:27:40.617 15:53:01 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:27:40.617 15:53:01 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:27:40.617 15:53:01 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:27:40.617 15:53:01 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:27:40.617 15:53:01 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:27:40.617 15:53:01 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:40.617 15:53:01 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:27:40.617 15:53:01 rpc_client -- scripts/common.sh@345 -- # : 1 00:27:40.617 15:53:01 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:40.617 15:53:01 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:40.617 15:53:01 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:27:40.617 15:53:01 rpc_client -- scripts/common.sh@353 -- # local d=1 00:27:40.617 15:53:01 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:40.617 15:53:01 rpc_client -- scripts/common.sh@355 -- # echo 1 00:27:40.617 15:53:01 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:27:40.617 15:53:01 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:27:40.617 15:53:01 rpc_client -- scripts/common.sh@353 -- # local d=2 00:27:40.618 15:53:01 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:40.618 15:53:01 rpc_client -- scripts/common.sh@355 -- # echo 2 00:27:40.618 15:53:01 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:27:40.618 15:53:01 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:40.618 15:53:01 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:40.618 15:53:01 rpc_client -- scripts/common.sh@368 -- # return 0 00:27:40.618 15:53:01 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:40.618 15:53:01 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:40.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:40.618 --rc genhtml_branch_coverage=1 00:27:40.618 --rc genhtml_function_coverage=1 00:27:40.618 --rc genhtml_legend=1 00:27:40.618 --rc geninfo_all_blocks=1 00:27:40.618 --rc geninfo_unexecuted_blocks=1 00:27:40.618 00:27:40.618 ' 00:27:40.618 15:53:01 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:40.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:40.618 --rc genhtml_branch_coverage=1 00:27:40.618 --rc genhtml_function_coverage=1 00:27:40.618 --rc genhtml_legend=1 00:27:40.618 --rc geninfo_all_blocks=1 00:27:40.618 --rc geninfo_unexecuted_blocks=1 00:27:40.618 00:27:40.618 ' 00:27:40.618 15:53:01 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:40.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:40.618 --rc genhtml_branch_coverage=1 00:27:40.618 --rc genhtml_function_coverage=1 00:27:40.618 --rc genhtml_legend=1 00:27:40.618 --rc geninfo_all_blocks=1 00:27:40.618 --rc geninfo_unexecuted_blocks=1 00:27:40.618 00:27:40.618 ' 00:27:40.618 15:53:01 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:40.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:40.618 --rc genhtml_branch_coverage=1 00:27:40.618 --rc genhtml_function_coverage=1 00:27:40.618 --rc genhtml_legend=1 00:27:40.618 --rc geninfo_all_blocks=1 00:27:40.618 --rc geninfo_unexecuted_blocks=1 00:27:40.618 00:27:40.618 ' 00:27:40.618 15:53:01 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:27:40.618 OK 00:27:40.618 15:53:01 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:27:40.618 00:27:40.618 real 0m0.195s 00:27:40.618 user 0m0.121s 00:27:40.618 sys 0m0.081s 00:27:40.618 ************************************ 00:27:40.618 END TEST rpc_client 00:27:40.618 ************************************ 00:27:40.618 15:53:01 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:40.618 15:53:01 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:27:40.618 15:53:01 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:27:40.618 15:53:01 -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:27:40.618 15:53:01 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:40.618 15:53:01 -- common/autotest_common.sh@10 -- # set +x 00:27:40.618 ************************************ 00:27:40.618 START TEST json_config 00:27:40.618 ************************************ 00:27:40.618 15:53:01 json_config -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:27:40.618 15:53:01 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:40.618 15:53:01 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:40.618 15:53:01 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:27:40.618 15:53:01 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:40.618 15:53:01 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:40.618 15:53:01 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:40.618 15:53:01 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:40.618 15:53:01 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:27:40.618 15:53:01 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:27:40.618 15:53:01 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:27:40.618 15:53:01 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:27:40.618 15:53:01 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:27:40.618 15:53:01 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:27:40.618 15:53:01 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:27:40.618 15:53:01 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:40.618 15:53:01 json_config -- scripts/common.sh@344 -- # case "$op" in 00:27:40.618 15:53:01 json_config -- scripts/common.sh@345 -- # : 1 00:27:40.618 15:53:01 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:40.618 15:53:01 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:40.618 15:53:01 json_config -- scripts/common.sh@365 -- # decimal 1 00:27:40.618 15:53:01 json_config -- scripts/common.sh@353 -- # local d=1 00:27:40.618 15:53:01 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:40.618 15:53:01 json_config -- scripts/common.sh@355 -- # echo 1 00:27:40.618 15:53:01 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:27:40.618 15:53:01 json_config -- scripts/common.sh@366 -- # decimal 2 00:27:40.618 15:53:01 json_config -- scripts/common.sh@353 -- # local d=2 00:27:40.618 15:53:01 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:40.618 15:53:01 json_config -- scripts/common.sh@355 -- # echo 2 00:27:40.618 15:53:01 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:27:40.618 15:53:01 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:40.618 15:53:01 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:40.618 15:53:01 json_config -- scripts/common.sh@368 -- # return 0 00:27:40.618 15:53:01 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:40.618 15:53:01 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:40.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:40.618 --rc genhtml_branch_coverage=1 00:27:40.618 --rc genhtml_function_coverage=1 00:27:40.618 --rc genhtml_legend=1 00:27:40.618 --rc geninfo_all_blocks=1 00:27:40.618 --rc geninfo_unexecuted_blocks=1 00:27:40.618 00:27:40.618 ' 00:27:40.618 15:53:01 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:40.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:40.618 --rc genhtml_branch_coverage=1 00:27:40.618 --rc genhtml_function_coverage=1 00:27:40.618 --rc genhtml_legend=1 00:27:40.618 --rc geninfo_all_blocks=1 00:27:40.618 --rc geninfo_unexecuted_blocks=1 00:27:40.618 00:27:40.618 ' 00:27:40.618 15:53:01 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:40.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:40.618 --rc genhtml_branch_coverage=1 00:27:40.618 --rc genhtml_function_coverage=1 00:27:40.618 --rc genhtml_legend=1 00:27:40.618 --rc geninfo_all_blocks=1 00:27:40.618 --rc geninfo_unexecuted_blocks=1 00:27:40.618 00:27:40.618 ' 00:27:40.618 15:53:01 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:40.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:40.618 --rc genhtml_branch_coverage=1 00:27:40.618 --rc genhtml_function_coverage=1 00:27:40.618 --rc genhtml_legend=1 00:27:40.618 --rc geninfo_all_blocks=1 00:27:40.618 --rc geninfo_unexecuted_blocks=1 00:27:40.618 00:27:40.618 ' 00:27:40.618 15:53:01 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:40.618 15:53:01 json_config -- nvmf/common.sh@7 -- # uname -s 00:27:40.618 15:53:01 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:40.618 15:53:01 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:40.618 15:53:01 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:40.618 15:53:01 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:40.618 15:53:01 json_config -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:40.618 15:53:01 json_config -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:27:40.618 15:53:01 json_config -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:40.618 
15:53:01 json_config -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:27:40.877 15:53:01 json_config -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1f363de5-7a80-42b1-b2e8-064deed1963e 00:27:40.877 15:53:01 json_config -- nvmf/common.sh@16 -- # NVME_HOSTID=1f363de5-7a80-42b1-b2e8-064deed1963e 00:27:40.877 15:53:01 json_config -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:40.877 15:53:01 json_config -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:27:40.877 15:53:01 json_config -- nvmf/common.sh@19 -- # NET_TYPE=phy-fallback 00:27:40.877 15:53:01 json_config -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:40.877 15:53:01 json_config -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:40.877 15:53:01 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:27:40.877 15:53:01 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:40.877 15:53:01 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:40.877 15:53:01 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:40.878 15:53:01 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.878 15:53:01 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.878 15:53:01 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.878 15:53:01 json_config -- paths/export.sh@5 -- # export PATH 00:27:40.878 15:53:01 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.878 15:53:01 json_config -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:27:40.878 15:53:01 json_config -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:27:40.878 15:53:01 json_config -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:27:40.878 15:53:01 json_config -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:27:40.878 15:53:01 json_config -- nvmf/common.sh@50 -- # : 0 00:27:40.878 
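nvmf/common.sh derives the host identity from nvme-cli: gen-hostnqn returns an NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid>, and the NVME_HOSTID traced above is exactly that trailing uuid. One plausible way to express the derivation (the parameter expansion is an assumption, but it matches the traced values):

    NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:1f363de5-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # keep only the uuid after the last ':'
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")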
15:53:01 json_config -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:27:40.878 15:53:01 json_config -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:27:40.878 15:53:01 json_config -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:27:40.878 15:53:01 json_config -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:40.878 15:53:01 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:40.878 15:53:01 json_config -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:27:40.878 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:27:40.878 15:53:01 json_config -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:27:40.878 15:53:01 json_config -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:27:40.878 15:53:01 json_config -- nvmf/common.sh@54 -- # have_pci_nics=0 00:27:40.878 15:53:01 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:27:40.878 15:53:01 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:27:40.878 15:53:01 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:27:40.878 15:53:01 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:27:40.878 15:53:01 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:27:40.878 15:53:01 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:27:40.878 WARNING: No tests are enabled so not running JSON configuration tests 00:27:40.878 15:53:01 json_config -- json_config/json_config.sh@28 -- # exit 0 00:27:40.878 00:27:40.878 real 0m0.154s 00:27:40.878 user 0m0.101s 00:27:40.878 sys 0m0.047s 00:27:40.878 15:53:01 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:40.878 15:53:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:27:40.878 ************************************ 00:27:40.878 END TEST json_config 00:27:40.878 ************************************ 00:27:40.878 15:53:02 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:27:40.878 15:53:02 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:27:40.878 15:53:02 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:40.878 15:53:02 -- common/autotest_common.sh@10 -- # set +x 00:27:40.878 ************************************ 00:27:40.878 START TEST json_config_extra_key 00:27:40.878 ************************************ 00:27:40.878 15:53:02 json_config_extra_key -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:27:40.878 15:53:02 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:40.878 15:53:02 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:40.878 15:53:02 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:27:40.878 15:53:02 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:40.878 15:53:02 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:40.878 15:53:02 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:40.878 15:53:02 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:40.878 15:53:02 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:27:40.878 
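json_config.sh bailed out immediately in the run traced just above: none of the test flags it gates on are enabled, so the arithmetic guard is zero and the script prints the WARNING and exits 0. In shape, per the traced lines @26-@28:

    if (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF \
          + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )); then
        echo 'WARNING: No tests are enabled so not running JSON configuration tests'
        exit 0
    fi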
15:53:02 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:27:40.878 15:53:02 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:27:40.878 15:53:02 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:27:40.878 15:53:02 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:27:40.878 15:53:02 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:27:40.878 15:53:02 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:27:40.878 15:53:02 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:40.878 15:53:02 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:27:40.878 15:53:02 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:27:40.878 15:53:02 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:40.878 15:53:02 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:40.878 15:53:02 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:27:40.878 15:53:02 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:27:40.878 15:53:02 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:40.878 15:53:02 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:27:40.878 15:53:02 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:27:40.878 15:53:02 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:27:40.878 15:53:02 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:27:40.878 15:53:02 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:40.878 15:53:02 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:27:40.878 15:53:02 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:27:40.878 15:53:02 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:40.878 15:53:02 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:40.878 15:53:02 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:27:40.878 15:53:02 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:40.878 15:53:02 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:40.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:40.878 --rc genhtml_branch_coverage=1 00:27:40.878 --rc genhtml_function_coverage=1 00:27:40.878 --rc genhtml_legend=1 00:27:40.878 --rc geninfo_all_blocks=1 00:27:40.878 --rc geninfo_unexecuted_blocks=1 00:27:40.878 00:27:40.878 ' 00:27:40.878 15:53:02 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:40.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:40.878 --rc genhtml_branch_coverage=1 00:27:40.878 --rc genhtml_function_coverage=1 00:27:40.878 --rc genhtml_legend=1 00:27:40.878 --rc geninfo_all_blocks=1 00:27:40.878 --rc geninfo_unexecuted_blocks=1 00:27:40.878 00:27:40.878 ' 00:27:40.878 15:53:02 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:40.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:40.878 --rc genhtml_branch_coverage=1 00:27:40.878 --rc genhtml_function_coverage=1 00:27:40.878 --rc genhtml_legend=1 00:27:40.878 --rc geninfo_all_blocks=1 00:27:40.878 --rc geninfo_unexecuted_blocks=1 00:27:40.878 00:27:40.878 ' 00:27:40.878 15:53:02 json_config_extra_key -- 
common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:40.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:40.878 --rc genhtml_branch_coverage=1 00:27:40.878 --rc genhtml_function_coverage=1 00:27:40.878 --rc genhtml_legend=1 00:27:40.878 --rc geninfo_all_blocks=1 00:27:40.878 --rc geninfo_unexecuted_blocks=1 00:27:40.878 00:27:40.878 ' 00:27:40.878 15:53:02 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:40.879 15:53:02 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:27:40.879 15:53:02 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:40.879 15:53:02 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:40.879 15:53:02 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:40.879 15:53:02 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:40.879 15:53:02 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:40.879 15:53:02 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:27:40.879 15:53:02 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:40.879 15:53:02 json_config_extra_key -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:27:40.879 15:53:02 json_config_extra_key -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1f363de5-7a80-42b1-b2e8-064deed1963e 00:27:40.879 15:53:02 json_config_extra_key -- nvmf/common.sh@16 -- # NVME_HOSTID=1f363de5-7a80-42b1-b2e8-064deed1963e 00:27:40.879 15:53:02 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:40.879 15:53:02 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:27:40.879 15:53:02 json_config_extra_key -- nvmf/common.sh@19 -- # NET_TYPE=phy-fallback 00:27:40.879 15:53:02 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:40.879 15:53:02 json_config_extra_key -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:40.879 15:53:02 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:27:40.879 15:53:02 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:40.879 15:53:02 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:40.879 15:53:02 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:40.879 15:53:02 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.879 15:53:02 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.879 15:53:02 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.879 15:53:02 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:27:40.879 15:53:02 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.879 15:53:02 json_config_extra_key -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:27:40.879 15:53:02 json_config_extra_key -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:27:40.879 15:53:02 json_config_extra_key -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:27:40.879 15:53:02 json_config_extra_key -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:27:40.879 15:53:02 json_config_extra_key -- nvmf/common.sh@50 -- # : 0 00:27:40.879 15:53:02 json_config_extra_key -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:27:40.879 15:53:02 json_config_extra_key -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:27:40.879 15:53:02 json_config_extra_key -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:27:40.879 15:53:02 json_config_extra_key -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:40.879 15:53:02 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:40.879 15:53:02 json_config_extra_key -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:27:40.879 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:27:40.879 15:53:02 json_config_extra_key -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:27:40.879 15:53:02 json_config_extra_key -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:27:40.879 15:53:02 json_config_extra_key -- nvmf/common.sh@54 -- # have_pci_nics=0 00:27:40.879 15:53:02 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:27:40.879 15:53:02 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:27:40.879 15:53:02 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:27:40.879 15:53:02 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:27:40.879 15:53:02 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:27:40.879 15:53:02 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:27:40.879 15:53:02 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:27:40.879 15:53:02 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:27:40.879 15:53:02 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A 
configs_path 00:27:40.879 15:53:02 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:27:40.879 INFO: launching applications... 00:27:40.879 15:53:02 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:27:40.879 15:53:02 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:27:40.879 15:53:02 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:27:40.879 15:53:02 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:27:40.879 15:53:02 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:27:40.879 15:53:02 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:27:40.879 15:53:02 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:27:40.879 15:53:02 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:27:40.879 15:53:02 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:27:40.879 15:53:02 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57793 00:27:40.879 15:53:02 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:27:40.879 Waiting for target to run... 00:27:40.879 15:53:02 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57793 /var/tmp/spdk_tgt.sock 00:27:40.879 15:53:02 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 57793 ']' 00:27:40.879 15:53:02 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:27:40.879 15:53:02 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:40.879 15:53:02 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:27:40.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:27:40.879 15:53:02 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:40.879 15:53:02 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:27:40.879 15:53:02 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:27:41.255 [2024-11-05 15:53:02.245230] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 00:27:41.255 [2024-11-05 15:53:02.245502] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57793 ] 00:27:41.255 [2024-11-05 15:53:02.566307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:41.514 [2024-11-05 15:53:02.659583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:41.774 00:27:41.774 INFO: shutting down applications... 
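The start/stop cycle around pid 57793 is the core of this suite: launch spdk_tgt with the extra_key JSON config, poll the RPC socket until the target answers, then send SIGINT and poll the pid down, as the trace below shows. Condensed into a standalone sketch (binary, socket, and config paths are from this run; the two polling loops approximate waitforlisten and json_config_test_shutdown_app rather than quoting them):

    bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk_tgt.sock

    "$bin" -m 0x1 -s 1024 -r "$sock" --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
    pid=$!

    for _ in $(seq 1 100); do                        # max_retries=100, as in the trace
        "$rpc" -s "$sock" rpc_get_methods &>/dev/null && break
        sleep 0.1
    done

    kill -SIGINT "$pid"                              # graceful shutdown
    for (( i = 0; i < 30; i++ )); do                 # up to 30 half-second ticks
        kill -0 "$pid" 2>/dev/null || break          # kill -0 only checks existence
        sleep 0.5
    done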
00:27:41.774 15:53:03 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:41.774 15:53:03 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:27:41.774 15:53:03 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:27:41.774 15:53:03 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:27:41.774 15:53:03 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:27:41.774 15:53:03 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:27:41.774 15:53:03 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:27:41.774 15:53:03 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57793 ]] 00:27:41.774 15:53:03 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57793 00:27:41.774 15:53:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:27:41.774 15:53:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:27:41.774 15:53:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57793 00:27:41.774 15:53:03 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:27:42.339 15:53:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:27:42.339 15:53:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:27:42.339 15:53:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57793 00:27:42.339 15:53:03 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:27:42.906 15:53:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:27:42.906 15:53:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:27:42.906 15:53:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57793 00:27:42.906 15:53:04 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:27:43.482 15:53:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:27:43.482 15:53:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:27:43.482 15:53:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57793 00:27:43.482 15:53:04 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:27:43.482 15:53:04 json_config_extra_key -- json_config/common.sh@43 -- # break 00:27:43.482 SPDK target shutdown done 00:27:43.482 Success 00:27:43.482 15:53:04 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:27:43.482 15:53:04 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:27:43.483 15:53:04 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:27:43.483 00:27:43.483 real 0m2.573s 00:27:43.483 user 0m2.347s 00:27:43.483 sys 0m0.389s 00:27:43.483 ************************************ 00:27:43.483 END TEST json_config_extra_key 00:27:43.483 ************************************ 00:27:43.483 15:53:04 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:43.483 15:53:04 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:27:43.483 15:53:04 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:27:43.483 15:53:04 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:27:43.483 15:53:04 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:43.483 15:53:04 -- common/autotest_common.sh@10 -- 
# set +x 00:27:43.483 ************************************ 00:27:43.483 START TEST alias_rpc 00:27:43.483 ************************************ 00:27:43.483 15:53:04 alias_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:27:43.483 * Looking for test storage... 00:27:43.483 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:27:43.483 15:53:04 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:43.483 15:53:04 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:27:43.483 15:53:04 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:43.483 15:53:04 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:43.483 15:53:04 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:43.483 15:53:04 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:43.483 15:53:04 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:43.483 15:53:04 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:27:43.483 15:53:04 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:27:43.483 15:53:04 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:27:43.483 15:53:04 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:27:43.483 15:53:04 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:27:43.483 15:53:04 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:27:43.483 15:53:04 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:27:43.483 15:53:04 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:43.483 15:53:04 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:27:43.483 15:53:04 alias_rpc -- scripts/common.sh@345 -- # : 1 00:27:43.483 15:53:04 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:43.483 15:53:04 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:43.483 15:53:04 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:27:43.483 15:53:04 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:27:43.483 15:53:04 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:43.483 15:53:04 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:27:43.483 15:53:04 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:27:43.483 15:53:04 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:27:43.483 15:53:04 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:27:43.483 15:53:04 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:43.483 15:53:04 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:27:43.483 15:53:04 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:27:43.483 15:53:04 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:43.483 15:53:04 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:43.483 15:53:04 alias_rpc -- scripts/common.sh@368 -- # return 0 00:27:43.483 15:53:04 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:43.483 15:53:04 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:43.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:43.483 --rc genhtml_branch_coverage=1 00:27:43.483 --rc genhtml_function_coverage=1 00:27:43.483 --rc genhtml_legend=1 00:27:43.483 --rc geninfo_all_blocks=1 00:27:43.483 --rc geninfo_unexecuted_blocks=1 00:27:43.483 00:27:43.483 ' 00:27:43.483 15:53:04 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:43.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:43.483 --rc genhtml_branch_coverage=1 00:27:43.483 --rc genhtml_function_coverage=1 00:27:43.483 --rc genhtml_legend=1 00:27:43.483 --rc geninfo_all_blocks=1 00:27:43.483 --rc geninfo_unexecuted_blocks=1 00:27:43.483 00:27:43.483 ' 00:27:43.483 15:53:04 alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:43.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:43.483 --rc genhtml_branch_coverage=1 00:27:43.483 --rc genhtml_function_coverage=1 00:27:43.483 --rc genhtml_legend=1 00:27:43.483 --rc geninfo_all_blocks=1 00:27:43.483 --rc geninfo_unexecuted_blocks=1 00:27:43.483 00:27:43.483 ' 00:27:43.483 15:53:04 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:43.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:43.483 --rc genhtml_branch_coverage=1 00:27:43.483 --rc genhtml_function_coverage=1 00:27:43.483 --rc genhtml_legend=1 00:27:43.483 --rc geninfo_all_blocks=1 00:27:43.483 --rc geninfo_unexecuted_blocks=1 00:27:43.483 00:27:43.483 ' 00:27:43.483 15:53:04 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:27:43.483 15:53:04 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57879 00:27:43.483 15:53:04 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57879 00:27:43.483 15:53:04 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 57879 ']' 00:27:43.483 15:53:04 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:43.483 15:53:04 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:43.483 15:53:04 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:43.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
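The alias_rpc body itself is tiny: with the target up, it replays the running config through rpc.py load_config -i (the -i switch reads here as include-aliases, that is, accept deprecated alias method names, which is what the suite exercises) and then tears the target down. A condensed sketch of that body, with killprocess simplified from the trace that follows (the uname and sudo checks are dropped):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    spdk_tgt_pid=57879                             # pid from this run

    "$rpc" load_config -i                          # replay config, aliases allowed

    killprocess() {
        local pid=$1
        kill -0 "$pid"                             # fails if the pid is already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")    # spdk_tgt shows up as reactor_0
        kill "$pid"
        wait "$pid"                                # reaps our own child
    }
    killprocess "$spdk_tgt_pid"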
00:27:43.483 15:53:04 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:43.483 15:53:04 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:43.483 15:53:04 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:43.741 [2024-11-05 15:53:04.848566] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 00:27:43.741 [2024-11-05 15:53:04.848857] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57879 ] 00:27:43.741 [2024-11-05 15:53:05.006021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:43.998 [2024-11-05 15:53:05.108486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:44.563 15:53:05 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:44.563 15:53:05 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:27:44.563 15:53:05 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:27:44.563 15:53:05 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57879 00:27:44.563 15:53:05 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 57879 ']' 00:27:44.563 15:53:05 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 57879 00:27:44.563 15:53:05 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:27:44.563 15:53:05 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:44.563 15:53:05 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57879 00:27:44.563 killing process with pid 57879 00:27:44.563 15:53:05 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:44.563 15:53:05 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:44.563 15:53:05 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57879' 00:27:44.563 15:53:05 alias_rpc -- common/autotest_common.sh@971 -- # kill 57879 00:27:44.563 15:53:05 alias_rpc -- common/autotest_common.sh@976 -- # wait 57879 00:27:46.464 00:27:46.464 real 0m2.785s 00:27:46.464 user 0m2.801s 00:27:46.464 sys 0m0.404s 00:27:46.464 15:53:07 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:46.464 15:53:07 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:46.464 ************************************ 00:27:46.464 END TEST alias_rpc 00:27:46.464 ************************************ 00:27:46.464 15:53:07 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:27:46.464 15:53:07 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:27:46.464 15:53:07 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:27:46.464 15:53:07 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:46.464 15:53:07 -- common/autotest_common.sh@10 -- # set +x 00:27:46.464 ************************************ 00:27:46.464 START TEST spdkcli_tcp 00:27:46.464 ************************************ 00:27:46.464 15:53:07 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:27:46.464 * Looking for test storage... 
00:27:46.464 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:27:46.464 15:53:07 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:46.464 15:53:07 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:27:46.464 15:53:07 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:46.464 15:53:07 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:46.464 15:53:07 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:46.464 15:53:07 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:46.464 15:53:07 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:46.464 15:53:07 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:27:46.464 15:53:07 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:27:46.464 15:53:07 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:27:46.464 15:53:07 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:27:46.464 15:53:07 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:27:46.464 15:53:07 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:27:46.464 15:53:07 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:27:46.464 15:53:07 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:46.464 15:53:07 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:27:46.464 15:53:07 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:27:46.464 15:53:07 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:46.464 15:53:07 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:46.464 15:53:07 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:27:46.464 15:53:07 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:27:46.464 15:53:07 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:46.464 15:53:07 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:27:46.464 15:53:07 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:27:46.464 15:53:07 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:27:46.464 15:53:07 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:27:46.464 15:53:07 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:46.464 15:53:07 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:27:46.464 15:53:07 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:27:46.464 15:53:07 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:46.464 15:53:07 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:46.464 15:53:07 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:27:46.464 15:53:07 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:46.464 15:53:07 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:46.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:46.464 --rc genhtml_branch_coverage=1 00:27:46.464 --rc genhtml_function_coverage=1 00:27:46.464 --rc genhtml_legend=1 00:27:46.464 --rc geninfo_all_blocks=1 00:27:46.464 --rc geninfo_unexecuted_blocks=1 00:27:46.464 00:27:46.464 ' 00:27:46.464 15:53:07 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:46.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:46.464 --rc genhtml_branch_coverage=1 00:27:46.464 --rc genhtml_function_coverage=1 00:27:46.464 --rc genhtml_legend=1 00:27:46.464 --rc geninfo_all_blocks=1 00:27:46.464 --rc geninfo_unexecuted_blocks=1 00:27:46.464 
00:27:46.464 ' 00:27:46.464 15:53:07 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:46.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:46.464 --rc genhtml_branch_coverage=1 00:27:46.464 --rc genhtml_function_coverage=1 00:27:46.464 --rc genhtml_legend=1 00:27:46.464 --rc geninfo_all_blocks=1 00:27:46.464 --rc geninfo_unexecuted_blocks=1 00:27:46.464 00:27:46.464 ' 00:27:46.464 15:53:07 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:46.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:46.464 --rc genhtml_branch_coverage=1 00:27:46.464 --rc genhtml_function_coverage=1 00:27:46.464 --rc genhtml_legend=1 00:27:46.464 --rc geninfo_all_blocks=1 00:27:46.464 --rc geninfo_unexecuted_blocks=1 00:27:46.464 00:27:46.464 ' 00:27:46.464 15:53:07 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:27:46.464 15:53:07 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:27:46.464 15:53:07 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:27:46.464 15:53:07 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:27:46.464 15:53:07 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:27:46.464 15:53:07 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:46.464 15:53:07 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:27:46.464 15:53:07 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:46.464 15:53:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:46.464 15:53:07 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57974 00:27:46.464 15:53:07 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57974 00:27:46.464 15:53:07 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 57974 ']' 00:27:46.464 15:53:07 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:46.464 15:53:07 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:27:46.464 15:53:07 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:46.464 15:53:07 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:46.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:46.464 15:53:07 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:46.464 15:53:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:46.464 [2024-11-05 15:53:07.679515] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 
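Unlike the earlier suites, spdkcli_tcp talks to the RPC server over TCP: the target still listens on /var/tmp/spdk.sock, and the socat process started just below (pid 57987) bridges 127.0.0.1:9998 to that socket. The plumbing, as a standalone sketch (commands taken from this run; -r and -t read here as connect retries and per-request timeout):

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &   # TCP front end for the unix socket
    socat_pid=$!

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

    kill "$socat_pid"    # the real test keeps socat around; killed here only to tidy the sketch

The rpc_get_methods answer is the long array of method names that follows.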
00:27:46.464 [2024-11-05 15:53:07.679793] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57974 ] 00:27:46.723 [2024-11-05 15:53:07.838968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:46.723 [2024-11-05 15:53:07.941658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:46.723 [2024-11-05 15:53:07.941673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:47.288 15:53:08 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:47.288 15:53:08 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:27:47.288 15:53:08 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57987 00:27:47.288 15:53:08 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:27:47.288 15:53:08 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:27:47.571 [ 00:27:47.571 "bdev_malloc_delete", 00:27:47.571 "bdev_malloc_create", 00:27:47.571 "bdev_null_resize", 00:27:47.571 "bdev_null_delete", 00:27:47.571 "bdev_null_create", 00:27:47.571 "bdev_nvme_cuse_unregister", 00:27:47.571 "bdev_nvme_cuse_register", 00:27:47.571 "bdev_opal_new_user", 00:27:47.571 "bdev_opal_set_lock_state", 00:27:47.571 "bdev_opal_delete", 00:27:47.571 "bdev_opal_get_info", 00:27:47.571 "bdev_opal_create", 00:27:47.571 "bdev_nvme_opal_revert", 00:27:47.571 "bdev_nvme_opal_init", 00:27:47.571 "bdev_nvme_send_cmd", 00:27:47.571 "bdev_nvme_set_keys", 00:27:47.571 "bdev_nvme_get_path_iostat", 00:27:47.571 "bdev_nvme_get_mdns_discovery_info", 00:27:47.571 "bdev_nvme_stop_mdns_discovery", 00:27:47.571 "bdev_nvme_start_mdns_discovery", 00:27:47.571 "bdev_nvme_set_multipath_policy", 00:27:47.571 "bdev_nvme_set_preferred_path", 00:27:47.571 "bdev_nvme_get_io_paths", 00:27:47.571 "bdev_nvme_remove_error_injection", 00:27:47.571 "bdev_nvme_add_error_injection", 00:27:47.571 "bdev_nvme_get_discovery_info", 00:27:47.571 "bdev_nvme_stop_discovery", 00:27:47.571 "bdev_nvme_start_discovery", 00:27:47.571 "bdev_nvme_get_controller_health_info", 00:27:47.571 "bdev_nvme_disable_controller", 00:27:47.571 "bdev_nvme_enable_controller", 00:27:47.571 "bdev_nvme_reset_controller", 00:27:47.571 "bdev_nvme_get_transport_statistics", 00:27:47.571 "bdev_nvme_apply_firmware", 00:27:47.571 "bdev_nvme_detach_controller", 00:27:47.571 "bdev_nvme_get_controllers", 00:27:47.571 "bdev_nvme_attach_controller", 00:27:47.571 "bdev_nvme_set_hotplug", 00:27:47.571 "bdev_nvme_set_options", 00:27:47.571 "bdev_passthru_delete", 00:27:47.571 "bdev_passthru_create", 00:27:47.571 "bdev_lvol_set_parent_bdev", 00:27:47.571 "bdev_lvol_set_parent", 00:27:47.571 "bdev_lvol_check_shallow_copy", 00:27:47.571 "bdev_lvol_start_shallow_copy", 00:27:47.571 "bdev_lvol_grow_lvstore", 00:27:47.571 "bdev_lvol_get_lvols", 00:27:47.571 "bdev_lvol_get_lvstores", 00:27:47.571 "bdev_lvol_delete", 00:27:47.571 "bdev_lvol_set_read_only", 00:27:47.571 "bdev_lvol_resize", 00:27:47.571 "bdev_lvol_decouple_parent", 00:27:47.571 "bdev_lvol_inflate", 00:27:47.571 "bdev_lvol_rename", 00:27:47.571 "bdev_lvol_clone_bdev", 00:27:47.571 "bdev_lvol_clone", 00:27:47.571 "bdev_lvol_snapshot", 00:27:47.571 "bdev_lvol_create", 00:27:47.571 "bdev_lvol_delete_lvstore", 00:27:47.571 "bdev_lvol_rename_lvstore", 00:27:47.571 
"bdev_lvol_create_lvstore", 00:27:47.571 "bdev_raid_set_options", 00:27:47.571 "bdev_raid_remove_base_bdev", 00:27:47.571 "bdev_raid_add_base_bdev", 00:27:47.571 "bdev_raid_delete", 00:27:47.571 "bdev_raid_create", 00:27:47.571 "bdev_raid_get_bdevs", 00:27:47.571 "bdev_error_inject_error", 00:27:47.571 "bdev_error_delete", 00:27:47.571 "bdev_error_create", 00:27:47.571 "bdev_split_delete", 00:27:47.571 "bdev_split_create", 00:27:47.571 "bdev_delay_delete", 00:27:47.571 "bdev_delay_create", 00:27:47.571 "bdev_delay_update_latency", 00:27:47.571 "bdev_zone_block_delete", 00:27:47.571 "bdev_zone_block_create", 00:27:47.571 "blobfs_create", 00:27:47.571 "blobfs_detect", 00:27:47.571 "blobfs_set_cache_size", 00:27:47.571 "bdev_xnvme_delete", 00:27:47.571 "bdev_xnvme_create", 00:27:47.571 "bdev_aio_delete", 00:27:47.571 "bdev_aio_rescan", 00:27:47.571 "bdev_aio_create", 00:27:47.571 "bdev_ftl_set_property", 00:27:47.571 "bdev_ftl_get_properties", 00:27:47.571 "bdev_ftl_get_stats", 00:27:47.571 "bdev_ftl_unmap", 00:27:47.571 "bdev_ftl_unload", 00:27:47.571 "bdev_ftl_delete", 00:27:47.571 "bdev_ftl_load", 00:27:47.571 "bdev_ftl_create", 00:27:47.571 "bdev_virtio_attach_controller", 00:27:47.571 "bdev_virtio_scsi_get_devices", 00:27:47.572 "bdev_virtio_detach_controller", 00:27:47.572 "bdev_virtio_blk_set_hotplug", 00:27:47.572 "bdev_iscsi_delete", 00:27:47.572 "bdev_iscsi_create", 00:27:47.572 "bdev_iscsi_set_options", 00:27:47.572 "accel_error_inject_error", 00:27:47.572 "ioat_scan_accel_module", 00:27:47.572 "dsa_scan_accel_module", 00:27:47.572 "iaa_scan_accel_module", 00:27:47.572 "keyring_file_remove_key", 00:27:47.572 "keyring_file_add_key", 00:27:47.572 "keyring_linux_set_options", 00:27:47.572 "fsdev_aio_delete", 00:27:47.572 "fsdev_aio_create", 00:27:47.572 "iscsi_get_histogram", 00:27:47.572 "iscsi_enable_histogram", 00:27:47.572 "iscsi_set_options", 00:27:47.572 "iscsi_get_auth_groups", 00:27:47.572 "iscsi_auth_group_remove_secret", 00:27:47.572 "iscsi_auth_group_add_secret", 00:27:47.572 "iscsi_delete_auth_group", 00:27:47.572 "iscsi_create_auth_group", 00:27:47.572 "iscsi_set_discovery_auth", 00:27:47.572 "iscsi_get_options", 00:27:47.572 "iscsi_target_node_request_logout", 00:27:47.572 "iscsi_target_node_set_redirect", 00:27:47.572 "iscsi_target_node_set_auth", 00:27:47.572 "iscsi_target_node_add_lun", 00:27:47.572 "iscsi_get_stats", 00:27:47.572 "iscsi_get_connections", 00:27:47.572 "iscsi_portal_group_set_auth", 00:27:47.572 "iscsi_start_portal_group", 00:27:47.572 "iscsi_delete_portal_group", 00:27:47.572 "iscsi_create_portal_group", 00:27:47.572 "iscsi_get_portal_groups", 00:27:47.572 "iscsi_delete_target_node", 00:27:47.572 "iscsi_target_node_remove_pg_ig_maps", 00:27:47.572 "iscsi_target_node_add_pg_ig_maps", 00:27:47.572 "iscsi_create_target_node", 00:27:47.572 "iscsi_get_target_nodes", 00:27:47.572 "iscsi_delete_initiator_group", 00:27:47.572 "iscsi_initiator_group_remove_initiators", 00:27:47.572 "iscsi_initiator_group_add_initiators", 00:27:47.572 "iscsi_create_initiator_group", 00:27:47.572 "iscsi_get_initiator_groups", 00:27:47.572 "nvmf_set_crdt", 00:27:47.572 "nvmf_set_config", 00:27:47.572 "nvmf_set_max_subsystems", 00:27:47.572 "nvmf_stop_mdns_prr", 00:27:47.572 "nvmf_publish_mdns_prr", 00:27:47.572 "nvmf_subsystem_get_listeners", 00:27:47.572 "nvmf_subsystem_get_qpairs", 00:27:47.572 "nvmf_subsystem_get_controllers", 00:27:47.572 "nvmf_get_stats", 00:27:47.572 "nvmf_get_transports", 00:27:47.572 "nvmf_create_transport", 00:27:47.572 "nvmf_get_targets", 00:27:47.572 
"nvmf_delete_target", 00:27:47.572 "nvmf_create_target", 00:27:47.572 "nvmf_subsystem_allow_any_host", 00:27:47.572 "nvmf_subsystem_set_keys", 00:27:47.572 "nvmf_subsystem_remove_host", 00:27:47.572 "nvmf_subsystem_add_host", 00:27:47.572 "nvmf_ns_remove_host", 00:27:47.572 "nvmf_ns_add_host", 00:27:47.572 "nvmf_subsystem_remove_ns", 00:27:47.572 "nvmf_subsystem_set_ns_ana_group", 00:27:47.572 "nvmf_subsystem_add_ns", 00:27:47.572 "nvmf_subsystem_listener_set_ana_state", 00:27:47.572 "nvmf_discovery_get_referrals", 00:27:47.572 "nvmf_discovery_remove_referral", 00:27:47.572 "nvmf_discovery_add_referral", 00:27:47.572 "nvmf_subsystem_remove_listener", 00:27:47.572 "nvmf_subsystem_add_listener", 00:27:47.572 "nvmf_delete_subsystem", 00:27:47.572 "nvmf_create_subsystem", 00:27:47.572 "nvmf_get_subsystems", 00:27:47.572 "env_dpdk_get_mem_stats", 00:27:47.572 "nbd_get_disks", 00:27:47.572 "nbd_stop_disk", 00:27:47.572 "nbd_start_disk", 00:27:47.572 "ublk_recover_disk", 00:27:47.572 "ublk_get_disks", 00:27:47.572 "ublk_stop_disk", 00:27:47.572 "ublk_start_disk", 00:27:47.572 "ublk_destroy_target", 00:27:47.572 "ublk_create_target", 00:27:47.572 "virtio_blk_create_transport", 00:27:47.572 "virtio_blk_get_transports", 00:27:47.572 "vhost_controller_set_coalescing", 00:27:47.572 "vhost_get_controllers", 00:27:47.572 "vhost_delete_controller", 00:27:47.572 "vhost_create_blk_controller", 00:27:47.572 "vhost_scsi_controller_remove_target", 00:27:47.572 "vhost_scsi_controller_add_target", 00:27:47.572 "vhost_start_scsi_controller", 00:27:47.572 "vhost_create_scsi_controller", 00:27:47.572 "thread_set_cpumask", 00:27:47.572 "scheduler_set_options", 00:27:47.572 "framework_get_governor", 00:27:47.572 "framework_get_scheduler", 00:27:47.572 "framework_set_scheduler", 00:27:47.572 "framework_get_reactors", 00:27:47.572 "thread_get_io_channels", 00:27:47.572 "thread_get_pollers", 00:27:47.572 "thread_get_stats", 00:27:47.572 "framework_monitor_context_switch", 00:27:47.572 "spdk_kill_instance", 00:27:47.572 "log_enable_timestamps", 00:27:47.572 "log_get_flags", 00:27:47.572 "log_clear_flag", 00:27:47.572 "log_set_flag", 00:27:47.572 "log_get_level", 00:27:47.572 "log_set_level", 00:27:47.572 "log_get_print_level", 00:27:47.572 "log_set_print_level", 00:27:47.572 "framework_enable_cpumask_locks", 00:27:47.572 "framework_disable_cpumask_locks", 00:27:47.572 "framework_wait_init", 00:27:47.572 "framework_start_init", 00:27:47.572 "scsi_get_devices", 00:27:47.572 "bdev_get_histogram", 00:27:47.572 "bdev_enable_histogram", 00:27:47.572 "bdev_set_qos_limit", 00:27:47.572 "bdev_set_qd_sampling_period", 00:27:47.572 "bdev_get_bdevs", 00:27:47.572 "bdev_reset_iostat", 00:27:47.572 "bdev_get_iostat", 00:27:47.572 "bdev_examine", 00:27:47.572 "bdev_wait_for_examine", 00:27:47.572 "bdev_set_options", 00:27:47.572 "accel_get_stats", 00:27:47.572 "accel_set_options", 00:27:47.572 "accel_set_driver", 00:27:47.572 "accel_crypto_key_destroy", 00:27:47.572 "accel_crypto_keys_get", 00:27:47.572 "accel_crypto_key_create", 00:27:47.572 "accel_assign_opc", 00:27:47.572 "accel_get_module_info", 00:27:47.572 "accel_get_opc_assignments", 00:27:47.572 "vmd_rescan", 00:27:47.572 "vmd_remove_device", 00:27:47.572 "vmd_enable", 00:27:47.572 "sock_get_default_impl", 00:27:47.572 "sock_set_default_impl", 00:27:47.572 "sock_impl_set_options", 00:27:47.572 "sock_impl_get_options", 00:27:47.572 "iobuf_get_stats", 00:27:47.572 "iobuf_set_options", 00:27:47.572 "keyring_get_keys", 00:27:47.572 "framework_get_pci_devices", 00:27:47.572 
"framework_get_config", 00:27:47.572 "framework_get_subsystems", 00:27:47.572 "fsdev_set_opts", 00:27:47.572 "fsdev_get_opts", 00:27:47.572 "trace_get_info", 00:27:47.572 "trace_get_tpoint_group_mask", 00:27:47.572 "trace_disable_tpoint_group", 00:27:47.572 "trace_enable_tpoint_group", 00:27:47.572 "trace_clear_tpoint_mask", 00:27:47.572 "trace_set_tpoint_mask", 00:27:47.572 "notify_get_notifications", 00:27:47.572 "notify_get_types", 00:27:47.572 "spdk_get_version", 00:27:47.572 "rpc_get_methods" 00:27:47.572 ] 00:27:47.572 15:53:08 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:27:47.572 15:53:08 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:47.572 15:53:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:47.572 15:53:08 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:27:47.572 15:53:08 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57974 00:27:47.572 15:53:08 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 57974 ']' 00:27:47.572 15:53:08 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 57974 00:27:47.572 15:53:08 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:27:47.572 15:53:08 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:47.572 15:53:08 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57974 00:27:47.572 killing process with pid 57974 00:27:47.572 15:53:08 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:47.572 15:53:08 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:47.572 15:53:08 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57974' 00:27:47.572 15:53:08 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 57974 00:27:47.572 15:53:08 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 57974 00:27:48.942 ************************************ 00:27:48.942 END TEST spdkcli_tcp 00:27:48.942 ************************************ 00:27:48.942 00:27:48.942 real 0m2.816s 00:27:48.942 user 0m5.026s 00:27:48.942 sys 0m0.409s 00:27:48.942 15:53:10 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:48.942 15:53:10 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:49.200 15:53:10 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:27:49.200 15:53:10 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:27:49.200 15:53:10 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:49.200 15:53:10 -- common/autotest_common.sh@10 -- # set +x 00:27:49.200 ************************************ 00:27:49.200 START TEST dpdk_mem_utility 00:27:49.200 ************************************ 00:27:49.200 15:53:10 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:27:49.200 * Looking for test storage... 
00:27:49.200 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:27:49.200 15:53:10 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:49.200 15:53:10 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:27:49.200 15:53:10 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:49.200 15:53:10 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:49.200 15:53:10 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:49.200 15:53:10 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:49.200 15:53:10 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:49.200 15:53:10 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:27:49.200 15:53:10 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:27:49.200 15:53:10 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:27:49.200 15:53:10 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:27:49.200 15:53:10 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:27:49.200 15:53:10 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:27:49.200 15:53:10 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:27:49.200 15:53:10 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:49.200 15:53:10 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:27:49.200 15:53:10 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:27:49.200 15:53:10 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:49.200 15:53:10 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:49.200 15:53:10 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:27:49.200 15:53:10 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:27:49.200 15:53:10 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:49.200 15:53:10 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:27:49.200 15:53:10 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:27:49.200 15:53:10 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:27:49.200 15:53:10 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:27:49.200 15:53:10 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:49.200 15:53:10 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:27:49.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:49.201 15:53:10 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:27:49.201 15:53:10 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:49.201 15:53:10 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:49.201 15:53:10 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:27:49.201 15:53:10 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:49.201 15:53:10 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:49.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.201 --rc genhtml_branch_coverage=1 00:27:49.201 --rc genhtml_function_coverage=1 00:27:49.201 --rc genhtml_legend=1 00:27:49.201 --rc geninfo_all_blocks=1 00:27:49.201 --rc geninfo_unexecuted_blocks=1 00:27:49.201 00:27:49.201 ' 00:27:49.201 15:53:10 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:49.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.201 --rc genhtml_branch_coverage=1 00:27:49.201 --rc genhtml_function_coverage=1 00:27:49.201 --rc genhtml_legend=1 00:27:49.201 --rc geninfo_all_blocks=1 00:27:49.201 --rc geninfo_unexecuted_blocks=1 00:27:49.201 00:27:49.201 ' 00:27:49.201 15:53:10 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:49.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.201 --rc genhtml_branch_coverage=1 00:27:49.201 --rc genhtml_function_coverage=1 00:27:49.201 --rc genhtml_legend=1 00:27:49.201 --rc geninfo_all_blocks=1 00:27:49.201 --rc geninfo_unexecuted_blocks=1 00:27:49.201 00:27:49.201 ' 00:27:49.201 15:53:10 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:49.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.201 --rc genhtml_branch_coverage=1 00:27:49.201 --rc genhtml_function_coverage=1 00:27:49.201 --rc genhtml_legend=1 00:27:49.201 --rc geninfo_all_blocks=1 00:27:49.201 --rc geninfo_unexecuted_blocks=1 00:27:49.201 00:27:49.201 ' 00:27:49.201 15:53:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:27:49.201 15:53:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58081 00:27:49.201 15:53:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58081 00:27:49.201 15:53:10 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 58081 ']' 00:27:49.201 15:53:10 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:49.201 15:53:10 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:49.201 15:53:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:49.201 15:53:10 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:49.201 15:53:10 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:49.201 15:53:10 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:27:49.201 [2024-11-05 15:53:10.534413] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 
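Everything dpdk_mem_utility prints below comes from one RPC plus the parser script: env_dpdk_get_mem_stats makes the target write /tmp/spdk_mem_dump.txt (the {"filename": ...} reply below), and dpdk_mem_info.py renders that file, first as the heap/mempool/memzone summary, then with -m 0 as the element-level listing (reading -m 0 as selecting heap id 0 is inferred from the output, not from the script's help text). As a sketch:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    mem=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py

    "$rpc" env_dpdk_get_mem_stats    # target dumps its DPDK memory state to /tmp/spdk_mem_dump.txt
    "$mem"                           # summary: heaps, mempools, memzones
    "$mem" -m 0                      # per-element breakdown (heap 0 in this run)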
00:27:49.201 [2024-11-05 15:53:10.534516] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58081 ] 00:27:49.458 [2024-11-05 15:53:10.694862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:49.458 [2024-11-05 15:53:10.795390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:50.023 15:53:11 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:50.023 15:53:11 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:27:50.023 15:53:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:27:50.023 15:53:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:27:50.023 15:53:11 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.023 15:53:11 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:27:50.282 { 00:27:50.282 "filename": "/tmp/spdk_mem_dump.txt" 00:27:50.282 } 00:27:50.283 15:53:11 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.283 15:53:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:27:50.283 DPDK memory size 816.000000 MiB in 1 heap(s) 00:27:50.283 1 heaps totaling size 816.000000 MiB 00:27:50.283 size: 816.000000 MiB heap id: 0 00:27:50.283 end heaps---------- 00:27:50.283 9 mempools totaling size 595.772034 MiB 00:27:50.283 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:27:50.283 size: 158.602051 MiB name: PDU_data_out_Pool 00:27:50.283 size: 92.545471 MiB name: bdev_io_58081 00:27:50.283 size: 50.003479 MiB name: msgpool_58081 00:27:50.283 size: 36.509338 MiB name: fsdev_io_58081 00:27:50.283 size: 21.763794 MiB name: PDU_Pool 00:27:50.283 size: 19.513306 MiB name: SCSI_TASK_Pool 00:27:50.283 size: 4.133484 MiB name: evtpool_58081 00:27:50.283 size: 0.026123 MiB name: Session_Pool 00:27:50.283 end mempools------- 00:27:50.283 6 memzones totaling size 4.142822 MiB 00:27:50.283 size: 1.000366 MiB name: RG_ring_0_58081 00:27:50.283 size: 1.000366 MiB name: RG_ring_1_58081 00:27:50.283 size: 1.000366 MiB name: RG_ring_4_58081 00:27:50.283 size: 1.000366 MiB name: RG_ring_5_58081 00:27:50.283 size: 0.125366 MiB name: RG_ring_2_58081 00:27:50.283 size: 0.015991 MiB name: RG_ring_3_58081 00:27:50.283 end memzones------- 00:27:50.283 15:53:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:27:50.283 heap id: 0 total size: 816.000000 MiB number of busy elements: 323 number of free elements: 18 00:27:50.283 list of free elements. 
size: 16.789429 MiB 00:27:50.283 element at address: 0x200006400000 with size: 1.995972 MiB 00:27:50.283 element at address: 0x20000a600000 with size: 1.995972 MiB 00:27:50.283 element at address: 0x200003e00000 with size: 1.991028 MiB 00:27:50.283 element at address: 0x200018d00040 with size: 0.999939 MiB 00:27:50.283 element at address: 0x200019100040 with size: 0.999939 MiB 00:27:50.283 element at address: 0x200019200000 with size: 0.999084 MiB 00:27:50.283 element at address: 0x200031e00000 with size: 0.994324 MiB 00:27:50.283 element at address: 0x200000400000 with size: 0.992004 MiB 00:27:50.283 element at address: 0x200018a00000 with size: 0.959656 MiB 00:27:50.283 element at address: 0x200019500040 with size: 0.936401 MiB 00:27:50.283 element at address: 0x200000200000 with size: 0.716980 MiB 00:27:50.283 element at address: 0x20001ac00000 with size: 0.559021 MiB 00:27:50.283 element at address: 0x200000c00000 with size: 0.490173 MiB 00:27:50.283 element at address: 0x200018e00000 with size: 0.487976 MiB 00:27:50.283 element at address: 0x200019600000 with size: 0.485413 MiB 00:27:50.283 element at address: 0x200012c00000 with size: 0.443237 MiB 00:27:50.283 element at address: 0x200028000000 with size: 0.391418 MiB 00:27:50.283 element at address: 0x200000800000 with size: 0.350891 MiB 00:27:50.283 list of standard malloc elements. size: 199.289673 MiB 00:27:50.283 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:27:50.283 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:27:50.283 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:27:50.283 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:27:50.283 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:27:50.283 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:27:50.283 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:27:50.283 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:27:50.283 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:27:50.283 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:27:50.283 element at address: 0x200012bff040 with size: 0.000305 MiB 00:27:50.283 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:27:50.283 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:27:50.283 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:27:50.283 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:27:50.283 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:27:50.283 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:27:50.283 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:27:50.283 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:27:50.283 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:27:50.283 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:27:50.283 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:27:50.283 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:27:50.283 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:27:50.283 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:27:50.283 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:27:50.283 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:27:50.283 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:27:50.283 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:27:50.283 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:27:50.283 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:27:50.283 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:27:50.283 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:27:50.283 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:27:50.283 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:27:50.283 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:27:50.283 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:27:50.283 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:27:50.283 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:27:50.283 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:27:50.283 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:27:50.283 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:27:50.283 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:27:50.283 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:27:50.283 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:27:50.283 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:27:50.283 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:27:50.283 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:27:50.283 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:27:50.283 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:27:50.283 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:27:50.283 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:27:50.283 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:27:50.283 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:27:50.283 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:27:50.283 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:27:50.283 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:27:50.283 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:27:50.283 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:27:50.283 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:27:50.283 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:27:50.283 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:27:50.283 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:27:50.283 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:27:50.283 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:27:50.283 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:27:50.283 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:27:50.283 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:27:50.283 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:27:50.283 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:27:50.283 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:27:50.283 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:27:50.283 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:27:50.283 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:27:50.283 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:27:50.283 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:27:50.283 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:27:50.283 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:27:50.283 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:27:50.283 element at 
address: 0x200000c7e5c0 with size: 0.000244 MiB 00:27:50.283 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:27:50.283 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:27:50.283 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:27:50.283 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:27:50.283 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:27:50.283 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:27:50.283 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:27:50.283 element at address: 0x200000cff000 with size: 0.000244 MiB 00:27:50.283 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:27:50.283 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:27:50.283 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:27:50.283 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:27:50.283 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:27:50.283 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:27:50.283 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:27:50.283 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:27:50.283 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:27:50.283 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:27:50.283 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:27:50.283 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:27:50.283 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:27:50.283 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:27:50.283 element at address: 0x200012bff180 with size: 0.000244 MiB 00:27:50.283 element at address: 0x200012bff280 with size: 0.000244 MiB 00:27:50.283 element at address: 0x200012bff380 with size: 0.000244 MiB 00:27:50.283 element at address: 0x200012bff480 with size: 0.000244 MiB 00:27:50.283 element at address: 0x200012bff580 with size: 0.000244 MiB 00:27:50.284 element at address: 0x200012bff680 with size: 0.000244 MiB 00:27:50.284 element at address: 0x200012bff780 with size: 0.000244 MiB 00:27:50.284 element at address: 0x200012bff880 with size: 0.000244 MiB 00:27:50.284 element at address: 0x200012bff980 with size: 0.000244 MiB 00:27:50.284 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:27:50.284 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:27:50.284 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:27:50.284 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:27:50.284 element at address: 0x200012c71780 with size: 0.000244 MiB 00:27:50.284 element at address: 0x200012c71880 with size: 0.000244 MiB 00:27:50.284 element at address: 0x200012c71980 with size: 0.000244 MiB 00:27:50.284 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:27:50.284 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:27:50.284 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:27:50.284 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:27:50.284 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:27:50.284 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:27:50.284 element at address: 0x200012c72080 with size: 0.000244 MiB 00:27:50.284 element at address: 0x200012c72180 with size: 0.000244 MiB 00:27:50.284 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:27:50.284 element at address: 0x200018e7cec0 
with size: 0.000244 MiB 00:27:50.284 element at address: 0x200018e7cfc0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x200018e7d6c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:27:50.284 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:27:50.284 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac8f1c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac8f2c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac8f3c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac8f4c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac8f5c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac8f6c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac8f7c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac8f8c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac8f9c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac8fac0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac8fbc0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac8fcc0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac8fdc0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac8fec0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac8ffc0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac900c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac901c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac902c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac903c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac904c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac905c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac909c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac90ac0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac90cc0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac90dc0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac90ec0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac910c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac911c0 with size: 0.000244 MiB 
00:27:50.284 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac913c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac914c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac917c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac926c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac929c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac930c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac942c0 with size: 0.000244 MiB 00:27:50.284 element at 
address: 0x20001ac943c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac945c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac946c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac948c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:27:50.284 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20001ac94bc0 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20001ac94cc0 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20001ac94fc0 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:27:50.285 element at address: 0x200028064340 with size: 0.000244 MiB 00:27:50.285 element at address: 0x200028064440 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806b100 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806b380 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806b480 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806b580 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806b680 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806b780 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806b880 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806b980 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806be80 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806bf80 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806c080 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806c180 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806c280 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806c380 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806c480 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806c580 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806c680 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806c780 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806c880 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806c980 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806d080 
with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806d180 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806d280 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806d380 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806d480 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806d580 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806d680 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806d780 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806d880 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806d980 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806da80 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806db80 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806de80 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806df80 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806e080 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806e180 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806e280 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806e380 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806e480 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806e580 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806e680 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806e780 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806e880 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806e980 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806f080 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806f180 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806f280 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806f380 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806f480 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806f580 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806f680 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806f780 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806f880 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806f980 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806fa80 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806fb80 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806fc80 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806fd80 with size: 0.000244 MiB 00:27:50.285 element at address: 0x20002806fe80 with size: 0.000244 MiB 00:27:50.285 list of memzone associated elements. 
size: 599.920898 MiB 00:27:50.285 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:27:50.285 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:27:50.285 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:27:50.285 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:27:50.285 element at address: 0x200012df4740 with size: 92.045105 MiB 00:27:50.285 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58081_0 00:27:50.285 element at address: 0x200000dff340 with size: 48.003113 MiB 00:27:50.285 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58081_0 00:27:50.285 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:27:50.285 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58081_0 00:27:50.285 element at address: 0x2000197be900 with size: 20.255615 MiB 00:27:50.285 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:27:50.285 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:27:50.285 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:27:50.285 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:27:50.285 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58081_0 00:27:50.285 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:27:50.285 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58081 00:27:50.285 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:27:50.285 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58081 00:27:50.285 element at address: 0x200018efde00 with size: 1.008179 MiB 00:27:50.285 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:27:50.285 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:27:50.285 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:27:50.285 element at address: 0x200018afde00 with size: 1.008179 MiB 00:27:50.285 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:27:50.285 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:27:50.285 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:27:50.285 element at address: 0x200000cff100 with size: 1.000549 MiB 00:27:50.285 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58081 00:27:50.285 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:27:50.285 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58081 00:27:50.285 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:27:50.285 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58081 00:27:50.285 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:27:50.285 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58081 00:27:50.285 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:27:50.285 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58081 00:27:50.285 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:27:50.285 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58081 00:27:50.285 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:27:50.285 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:27:50.285 element at address: 0x200012c72280 with size: 0.500549 MiB 00:27:50.285 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:27:50.285 element at address: 0x20001967c440 with size: 0.250549 MiB 00:27:50.285 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:27:50.285 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:27:50.285 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58081 00:27:50.285 element at address: 0x20000085df80 with size: 0.125549 MiB 00:27:50.285 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58081 00:27:50.285 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:27:50.285 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:27:50.285 element at address: 0x200028064540 with size: 0.023804 MiB 00:27:50.285 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:27:50.285 element at address: 0x200000859d40 with size: 0.016174 MiB 00:27:50.285 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58081 00:27:50.286 element at address: 0x20002806a6c0 with size: 0.002502 MiB 00:27:50.286 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:27:50.286 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:27:50.286 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58081 00:27:50.286 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:27:50.286 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58081 00:27:50.286 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:27:50.286 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58081 00:27:50.286 element at address: 0x20002806b200 with size: 0.000366 MiB 00:27:50.286 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:27:50.286 15:53:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:27:50.286 15:53:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58081 00:27:50.286 15:53:11 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 58081 ']' 00:27:50.286 15:53:11 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 58081 00:27:50.286 15:53:11 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:27:50.286 15:53:11 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:50.286 15:53:11 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58081 00:27:50.286 killing process with pid 58081 00:27:50.286 15:53:11 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:50.286 15:53:11 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:50.286 15:53:11 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58081' 00:27:50.286 15:53:11 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 58081 00:27:50.286 15:53:11 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 58081 00:27:51.658 ************************************ 00:27:51.658 END TEST dpdk_mem_utility 00:27:51.658 ************************************ 00:27:51.658 00:27:51.658 real 0m2.698s 00:27:51.658 user 0m2.635s 00:27:51.658 sys 0m0.398s 00:27:51.658 15:53:13 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:51.658 15:53:13 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:27:51.916 15:53:13 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:27:51.916 15:53:13 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:27:51.916 15:53:13 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:51.916 15:53:13 -- common/autotest_common.sh@10 -- # set +x 
00:27:51.916 ************************************ 00:27:51.916 START TEST event 00:27:51.916 ************************************ 00:27:51.916 15:53:13 event -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:27:51.916 * Looking for test storage... 00:27:51.916 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:27:51.916 15:53:13 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:51.916 15:53:13 event -- common/autotest_common.sh@1691 -- # lcov --version 00:27:51.916 15:53:13 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:51.916 15:53:13 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:51.916 15:53:13 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:51.916 15:53:13 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:51.916 15:53:13 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:51.916 15:53:13 event -- scripts/common.sh@336 -- # IFS=.-: 00:27:51.916 15:53:13 event -- scripts/common.sh@336 -- # read -ra ver1 00:27:51.916 15:53:13 event -- scripts/common.sh@337 -- # IFS=.-: 00:27:51.916 15:53:13 event -- scripts/common.sh@337 -- # read -ra ver2 00:27:51.916 15:53:13 event -- scripts/common.sh@338 -- # local 'op=<' 00:27:51.916 15:53:13 event -- scripts/common.sh@340 -- # ver1_l=2 00:27:51.916 15:53:13 event -- scripts/common.sh@341 -- # ver2_l=1 00:27:51.916 15:53:13 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:51.916 15:53:13 event -- scripts/common.sh@344 -- # case "$op" in 00:27:51.916 15:53:13 event -- scripts/common.sh@345 -- # : 1 00:27:51.916 15:53:13 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:51.916 15:53:13 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:51.916 15:53:13 event -- scripts/common.sh@365 -- # decimal 1 00:27:51.916 15:53:13 event -- scripts/common.sh@353 -- # local d=1 00:27:51.916 15:53:13 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:51.916 15:53:13 event -- scripts/common.sh@355 -- # echo 1 00:27:51.916 15:53:13 event -- scripts/common.sh@365 -- # ver1[v]=1 00:27:51.916 15:53:13 event -- scripts/common.sh@366 -- # decimal 2 00:27:51.916 15:53:13 event -- scripts/common.sh@353 -- # local d=2 00:27:51.916 15:53:13 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:51.916 15:53:13 event -- scripts/common.sh@355 -- # echo 2 00:27:51.916 15:53:13 event -- scripts/common.sh@366 -- # ver2[v]=2 00:27:51.916 15:53:13 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:51.916 15:53:13 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:51.916 15:53:13 event -- scripts/common.sh@368 -- # return 0 00:27:51.916 15:53:13 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:51.916 15:53:13 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:51.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:51.916 --rc genhtml_branch_coverage=1 00:27:51.916 --rc genhtml_function_coverage=1 00:27:51.916 --rc genhtml_legend=1 00:27:51.916 --rc geninfo_all_blocks=1 00:27:51.916 --rc geninfo_unexecuted_blocks=1 00:27:51.916 00:27:51.916 ' 00:27:51.916 15:53:13 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:51.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:51.916 --rc genhtml_branch_coverage=1 00:27:51.916 --rc genhtml_function_coverage=1 00:27:51.916 --rc genhtml_legend=1 00:27:51.916 --rc 
geninfo_all_blocks=1 00:27:51.916 --rc geninfo_unexecuted_blocks=1 00:27:51.916 00:27:51.916 ' 00:27:51.916 15:53:13 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:51.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:51.916 --rc genhtml_branch_coverage=1 00:27:51.916 --rc genhtml_function_coverage=1 00:27:51.916 --rc genhtml_legend=1 00:27:51.916 --rc geninfo_all_blocks=1 00:27:51.916 --rc geninfo_unexecuted_blocks=1 00:27:51.916 00:27:51.916 ' 00:27:51.916 15:53:13 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:51.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:51.916 --rc genhtml_branch_coverage=1 00:27:51.916 --rc genhtml_function_coverage=1 00:27:51.916 --rc genhtml_legend=1 00:27:51.916 --rc geninfo_all_blocks=1 00:27:51.916 --rc geninfo_unexecuted_blocks=1 00:27:51.916 00:27:51.916 ' 00:27:51.916 15:53:13 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:27:51.916 15:53:13 event -- bdev/nbd_common.sh@6 -- # set -e 00:27:51.916 15:53:13 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:27:51.916 15:53:13 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:27:51.917 15:53:13 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:51.917 15:53:13 event -- common/autotest_common.sh@10 -- # set +x 00:27:51.917 ************************************ 00:27:51.917 START TEST event_perf 00:27:51.917 ************************************ 00:27:51.917 15:53:13 event.event_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:27:51.917 Running I/O for 1 seconds...[2024-11-05 15:53:13.219887] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 00:27:51.917 [2024-11-05 15:53:13.219989] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58178 ] 00:27:52.224 [2024-11-05 15:53:13.381263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:52.224 [2024-11-05 15:53:13.485840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:52.224 [2024-11-05 15:53:13.486163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:52.224 [2024-11-05 15:53:13.486230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:52.224 Running I/O for 1 seconds...[2024-11-05 15:53:13.486230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:53.650 00:27:53.650 lcore 0: 203218 00:27:53.650 lcore 1: 203218 00:27:53.650 lcore 2: 203220 00:27:53.650 lcore 3: 203218 00:27:53.650 done. 
00:27:53.650 ************************************ 00:27:53.650 END TEST event_perf 00:27:53.650 ************************************ 00:27:53.650 00:27:53.650 real 0m1.466s 00:27:53.650 user 0m4.271s 00:27:53.650 sys 0m0.078s 00:27:53.650 15:53:14 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:53.650 15:53:14 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:27:53.650 15:53:14 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:27:53.650 15:53:14 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:27:53.650 15:53:14 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:53.650 15:53:14 event -- common/autotest_common.sh@10 -- # set +x 00:27:53.650 ************************************ 00:27:53.650 START TEST event_reactor 00:27:53.650 ************************************ 00:27:53.650 15:53:14 event.event_reactor -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:27:53.650 [2024-11-05 15:53:14.715350] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 00:27:53.650 [2024-11-05 15:53:14.715476] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58212 ] 00:27:53.650 [2024-11-05 15:53:14.881920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:53.650 [2024-11-05 15:53:14.980957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:55.023 test_start 00:27:55.023 oneshot 00:27:55.023 tick 100 00:27:55.023 tick 100 00:27:55.023 tick 250 00:27:55.023 tick 100 00:27:55.023 tick 100 00:27:55.023 tick 250 00:27:55.023 tick 100 00:27:55.023 tick 500 00:27:55.023 tick 100 00:27:55.023 tick 100 00:27:55.023 tick 250 00:27:55.023 tick 100 00:27:55.023 tick 100 00:27:55.023 test_end 00:27:55.023 ************************************ 00:27:55.023 END TEST event_reactor 00:27:55.023 ************************************ 00:27:55.023 00:27:55.023 real 0m1.442s 00:27:55.023 user 0m1.270s 00:27:55.023 sys 0m0.063s 00:27:55.023 15:53:16 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:55.023 15:53:16 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:27:55.023 15:53:16 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:27:55.023 15:53:16 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:27:55.023 15:53:16 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:55.023 15:53:16 event -- common/autotest_common.sh@10 -- # set +x 00:27:55.023 ************************************ 00:27:55.023 START TEST event_reactor_perf 00:27:55.023 ************************************ 00:27:55.023 15:53:16 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:27:55.023 [2024-11-05 15:53:16.198511] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 
00:27:55.023 [2024-11-05 15:53:16.198938] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58254 ] 00:27:55.023 [2024-11-05 15:53:16.353887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:55.281 [2024-11-05 15:53:16.453051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:56.653 test_start 00:27:56.653 test_end 00:27:56.653 Performance: 316862 events per second 00:27:56.653 00:27:56.653 real 0m1.434s 00:27:56.653 user 0m1.265s 00:27:56.653 sys 0m0.061s 00:27:56.653 15:53:17 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:56.653 15:53:17 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:27:56.653 ************************************ 00:27:56.653 END TEST event_reactor_perf 00:27:56.653 ************************************ 00:27:56.653 15:53:17 event -- event/event.sh@49 -- # uname -s 00:27:56.653 15:53:17 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:27:56.653 15:53:17 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:27:56.653 15:53:17 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:27:56.653 15:53:17 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:56.653 15:53:17 event -- common/autotest_common.sh@10 -- # set +x 00:27:56.653 ************************************ 00:27:56.653 START TEST event_scheduler 00:27:56.653 ************************************ 00:27:56.653 15:53:17 event.event_scheduler -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:27:56.653 * Looking for test storage... 
00:27:56.653 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:27:56.653 15:53:17 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:56.653 15:53:17 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:27:56.653 15:53:17 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:56.653 15:53:17 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:56.653 15:53:17 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:56.653 15:53:17 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:56.653 15:53:17 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:56.653 15:53:17 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:27:56.653 15:53:17 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:27:56.653 15:53:17 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:27:56.653 15:53:17 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:27:56.653 15:53:17 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:27:56.653 15:53:17 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:27:56.653 15:53:17 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:27:56.653 15:53:17 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:56.653 15:53:17 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:27:56.653 15:53:17 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:27:56.653 15:53:17 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:56.653 15:53:17 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:56.653 15:53:17 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:27:56.653 15:53:17 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:27:56.653 15:53:17 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:56.653 15:53:17 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:27:56.653 15:53:17 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:27:56.653 15:53:17 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:27:56.653 15:53:17 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:27:56.653 15:53:17 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:56.653 15:53:17 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:27:56.653 15:53:17 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:27:56.653 15:53:17 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:56.653 15:53:17 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:56.653 15:53:17 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:27:56.653 15:53:17 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:56.653 15:53:17 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:56.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:56.653 --rc genhtml_branch_coverage=1 00:27:56.653 --rc genhtml_function_coverage=1 00:27:56.653 --rc genhtml_legend=1 00:27:56.654 --rc geninfo_all_blocks=1 00:27:56.654 --rc geninfo_unexecuted_blocks=1 00:27:56.654 00:27:56.654 ' 00:27:56.654 15:53:17 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:56.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:56.654 --rc genhtml_branch_coverage=1 00:27:56.654 --rc genhtml_function_coverage=1 00:27:56.654 --rc genhtml_legend=1 00:27:56.654 --rc geninfo_all_blocks=1 00:27:56.654 --rc geninfo_unexecuted_blocks=1 00:27:56.654 00:27:56.654 ' 00:27:56.654 15:53:17 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:56.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:56.654 --rc genhtml_branch_coverage=1 00:27:56.654 --rc genhtml_function_coverage=1 00:27:56.654 --rc genhtml_legend=1 00:27:56.654 --rc geninfo_all_blocks=1 00:27:56.654 --rc geninfo_unexecuted_blocks=1 00:27:56.654 00:27:56.654 ' 00:27:56.654 15:53:17 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:56.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:56.654 --rc genhtml_branch_coverage=1 00:27:56.654 --rc genhtml_function_coverage=1 00:27:56.654 --rc genhtml_legend=1 00:27:56.654 --rc geninfo_all_blocks=1 00:27:56.654 --rc geninfo_unexecuted_blocks=1 00:27:56.654 00:27:56.654 ' 00:27:56.654 15:53:17 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:27:56.654 15:53:17 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58319 00:27:56.654 15:53:17 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:27:56.654 15:53:17 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58319 00:27:56.654 15:53:17 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 58319 ']' 00:27:56.654 15:53:17 event.event_scheduler -- scheduler/scheduler.sh@34 -- # 
/home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:27:56.654 15:53:17 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:56.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:56.654 15:53:17 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:56.654 15:53:17 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:56.654 15:53:17 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:56.654 15:53:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:27:56.654 [2024-11-05 15:53:17.847231] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 00:27:56.654 [2024-11-05 15:53:17.847353] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58319 ] 00:27:56.654 [2024-11-05 15:53:18.003012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:56.911 [2024-11-05 15:53:18.108310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:56.911 [2024-11-05 15:53:18.108528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:56.911 [2024-11-05 15:53:18.108874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:56.911 [2024-11-05 15:53:18.108880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:57.476 15:53:18 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:57.476 15:53:18 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:27:57.476 15:53:18 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:27:57.476 15:53:18 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.476 15:53:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:27:57.476 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:27:57.476 POWER: Cannot set governor of lcore 0 to userspace 00:27:57.476 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:27:57.476 POWER: Cannot set governor of lcore 0 to performance 00:27:57.476 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:27:57.476 POWER: Cannot set governor of lcore 0 to userspace 00:27:57.477 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:27:57.477 POWER: Cannot set governor of lcore 0 to userspace 00:27:57.477 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:27:57.477 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:27:57.477 POWER: Unable to set Power Management Environment for lcore 0 00:27:57.477 [2024-11-05 15:53:18.754380] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:27:57.477 [2024-11-05 15:53:18.754401] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:27:57.477 [2024-11-05 15:53:18.754410] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:27:57.477 [2024-11-05 
15:53:18.754426] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:27:57.477 [2024-11-05 15:53:18.754434] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:27:57.477 [2024-11-05 15:53:18.754443] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:27:57.477 15:53:18 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.477 15:53:18 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:27:57.477 15:53:18 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.477 15:53:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:27:57.735 [2024-11-05 15:53:18.975483] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:27:57.735 15:53:18 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.735 15:53:18 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:27:57.735 15:53:18 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:27:57.735 15:53:18 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:57.735 15:53:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:27:57.735 ************************************ 00:27:57.735 START TEST scheduler_create_thread 00:27:57.735 ************************************ 00:27:57.735 15:53:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:27:57.735 15:53:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:27:57.735 15:53:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.735 15:53:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:27:57.735 2 00:27:57.736 15:53:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.736 15:53:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:27:57.736 15:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.736 15:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:27:57.736 3 00:27:57.736 15:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.736 15:53:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:27:57.736 15:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.736 15:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:27:57.736 4 00:27:57.736 15:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.736 15:53:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:27:57.736 15:53:19 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.736 15:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:27:57.736 5 00:27:57.736 15:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.736 15:53:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:27:57.736 15:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.736 15:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:27:57.736 6 00:27:57.736 15:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.736 15:53:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:27:57.736 15:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.736 15:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:27:57.736 7 00:27:57.736 15:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.736 15:53:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:27:57.736 15:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.736 15:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:27:57.736 8 00:27:57.736 15:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.736 15:53:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:27:57.736 15:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.736 15:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:27:57.736 9 00:27:57.736 15:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.736 15:53:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:27:57.736 15:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.736 15:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:27:57.736 10 00:27:57.736 15:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.736 15:53:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:27:57.736 15:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.736 15:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:27:57.736 15:53:19 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.736 15:53:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:27:57.736 15:53:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:27:57.736 15:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.736 15:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:27:57.736 15:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.736 15:53:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:27:57.736 15:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.736 15:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:27:57.736 15:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.736 15:53:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:27:57.736 15:53:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:27:57.736 15:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.736 15:53:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:27:59.110 ************************************ 00:27:59.110 END TEST scheduler_create_thread 00:27:59.110 ************************************ 00:27:59.110 15:53:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.110 00:27:59.110 real 0m1.175s 00:27:59.110 user 0m0.012s 00:27:59.110 sys 0m0.006s 00:27:59.110 15:53:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:59.110 15:53:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:27:59.110 15:53:20 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:27:59.110 15:53:20 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58319 00:27:59.110 15:53:20 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 58319 ']' 00:27:59.110 15:53:20 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 58319 00:27:59.110 15:53:20 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:27:59.110 15:53:20 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:59.110 15:53:20 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58319 00:27:59.110 killing process with pid 58319 00:27:59.110 15:53:20 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:27:59.110 15:53:20 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:27:59.110 15:53:20 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58319' 00:27:59.110 15:53:20 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 58319 00:27:59.110 
15:53:20 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 58319 00:27:59.367 [2024-11-05 15:53:20.640707] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:27:59.933 00:27:59.933 real 0m3.642s 00:27:59.933 user 0m6.181s 00:27:59.933 sys 0m0.354s 00:27:59.933 ************************************ 00:27:59.933 END TEST event_scheduler 00:27:59.933 ************************************ 00:27:59.933 15:53:21 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:59.933 15:53:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:28:00.191 15:53:21 event -- event/event.sh@51 -- # modprobe -n nbd 00:28:00.191 15:53:21 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:28:00.191 15:53:21 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:28:00.191 15:53:21 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:00.191 15:53:21 event -- common/autotest_common.sh@10 -- # set +x 00:28:00.191 ************************************ 00:28:00.191 START TEST app_repeat 00:28:00.191 ************************************ 00:28:00.191 15:53:21 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:28:00.191 15:53:21 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:00.191 15:53:21 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:00.191 15:53:21 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:28:00.191 15:53:21 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:28:00.191 15:53:21 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:28:00.191 15:53:21 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:28:00.191 15:53:21 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:28:00.191 Process app_repeat pid: 58414 00:28:00.191 spdk_app_start Round 0 00:28:00.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:28:00.191 15:53:21 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58414 00:28:00.191 15:53:21 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:28:00.191 15:53:21 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58414' 00:28:00.191 15:53:21 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:28:00.191 15:53:21 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:28:00.191 15:53:21 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58414 /var/tmp/spdk-nbd.sock 00:28:00.191 15:53:21 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58414 ']' 00:28:00.191 15:53:21 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:28:00.191 15:53:21 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:00.191 15:53:21 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:28:00.191 15:53:21 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:00.191 15:53:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:28:00.191 15:53:21 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:28:00.191 [2024-11-05 15:53:21.365833] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 00:28:00.191 [2024-11-05 15:53:21.365955] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58414 ] 00:28:00.191 [2024-11-05 15:53:21.526552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:00.449 [2024-11-05 15:53:21.627490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:00.449 [2024-11-05 15:53:21.627696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:01.015 15:53:22 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:01.015 15:53:22 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:28:01.015 15:53:22 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:28:01.273 Malloc0 00:28:01.273 15:53:22 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:28:01.531 Malloc1 00:28:01.531 15:53:22 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:28:01.531 15:53:22 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:01.531 15:53:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:28:01.531 15:53:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:28:01.531 15:53:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:01.531 15:53:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:28:01.531 15:53:22 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:28:01.531 15:53:22 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:01.531 15:53:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:28:01.531 15:53:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:01.531 15:53:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:01.531 15:53:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:01.531 15:53:22 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:28:01.531 15:53:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:01.531 15:53:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:01.531 15:53:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:28:01.531 /dev/nbd0 00:28:01.789 15:53:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:01.789 15:53:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:01.789 15:53:22 event.app_repeat -- 
common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:28:01.789 15:53:22 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:28:01.789 15:53:22 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:28:01.789 15:53:22 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:28:01.789 15:53:22 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:28:01.789 15:53:22 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:28:01.789 15:53:22 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:28:01.789 15:53:22 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:28:01.789 15:53:22 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:28:01.789 1+0 records in 00:28:01.789 1+0 records out 00:28:01.789 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000162502 s, 25.2 MB/s 00:28:01.789 15:53:22 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:28:01.789 15:53:22 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:28:01.789 15:53:22 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:28:01.789 15:53:22 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:28:01.789 15:53:22 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:28:01.789 15:53:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:01.789 15:53:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:01.789 15:53:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:28:01.789 /dev/nbd1 00:28:01.789 15:53:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:01.789 15:53:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:01.789 15:53:23 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:28:01.789 15:53:23 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:28:01.789 15:53:23 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:28:01.789 15:53:23 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:28:01.789 15:53:23 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:28:01.789 15:53:23 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:28:01.789 15:53:23 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:28:01.789 15:53:23 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:28:01.789 15:53:23 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:28:01.789 1+0 records in 00:28:01.789 1+0 records out 00:28:01.789 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000237762 s, 17.2 MB/s 00:28:01.789 15:53:23 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:28:01.789 15:53:23 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:28:01.789 15:53:23 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:28:01.789 15:53:23 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 
00:28:01.789 15:53:23 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:28:01.789 15:53:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:01.789 15:53:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:01.789 15:53:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:01.789 15:53:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:01.789 15:53:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:02.047 15:53:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:28:02.047 { 00:28:02.047 "nbd_device": "/dev/nbd0", 00:28:02.047 "bdev_name": "Malloc0" 00:28:02.047 }, 00:28:02.047 { 00:28:02.047 "nbd_device": "/dev/nbd1", 00:28:02.047 "bdev_name": "Malloc1" 00:28:02.047 } 00:28:02.047 ]' 00:28:02.047 15:53:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:28:02.047 { 00:28:02.047 "nbd_device": "/dev/nbd0", 00:28:02.047 "bdev_name": "Malloc0" 00:28:02.047 }, 00:28:02.047 { 00:28:02.047 "nbd_device": "/dev/nbd1", 00:28:02.047 "bdev_name": "Malloc1" 00:28:02.047 } 00:28:02.047 ]' 00:28:02.047 15:53:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:02.047 15:53:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:28:02.047 /dev/nbd1' 00:28:02.047 15:53:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:02.047 15:53:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:28:02.047 /dev/nbd1' 00:28:02.047 15:53:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:28:02.047 15:53:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:28:02.047 15:53:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:28:02.047 15:53:23 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:28:02.047 15:53:23 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:28:02.047 15:53:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:02.047 15:53:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:28:02.047 15:53:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:28:02.047 15:53:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:28:02.047 15:53:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:28:02.047 15:53:23 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:28:02.047 256+0 records in 00:28:02.047 256+0 records out 00:28:02.047 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00790233 s, 133 MB/s 00:28:02.047 15:53:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:02.047 15:53:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:28:02.305 256+0 records in 00:28:02.305 256+0 records out 00:28:02.305 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0184267 s, 56.9 MB/s 00:28:02.305 15:53:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:02.305 15:53:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 
00:28:02.305 256+0 records in 00:28:02.305 256+0 records out 00:28:02.305 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.018071 s, 58.0 MB/s 00:28:02.305 15:53:23 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:28:02.305 15:53:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:02.305 15:53:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:28:02.305 15:53:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:28:02.305 15:53:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:28:02.305 15:53:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:28:02.305 15:53:23 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:28:02.305 15:53:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:02.305 15:53:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:28:02.305 15:53:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:02.305 15:53:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:28:02.305 15:53:23 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:28:02.305 15:53:23 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:28:02.305 15:53:23 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:02.305 15:53:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:02.305 15:53:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:02.305 15:53:23 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:28:02.305 15:53:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:02.305 15:53:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:28:02.593 15:53:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:02.593 15:53:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:02.593 15:53:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:02.593 15:53:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:02.593 15:53:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:02.593 15:53:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:02.593 15:53:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:28:02.593 15:53:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:28:02.593 15:53:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:02.593 15:53:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:28:02.593 15:53:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:02.593 15:53:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:02.593 15:53:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:02.593 15:53:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:02.593 15:53:23 event.app_repeat -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:02.593 15:53:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:02.593 15:53:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:28:02.593 15:53:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:28:02.593 15:53:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:02.593 15:53:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:02.593 15:53:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:02.852 15:53:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:28:02.852 15:53:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:28:02.852 15:53:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:02.852 15:53:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:28:02.852 15:53:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:02.852 15:53:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:28:02.852 15:53:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:28:02.852 15:53:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:28:02.852 15:53:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:28:02.852 15:53:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:28:02.852 15:53:24 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:28:02.852 15:53:24 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:28:02.852 15:53:24 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:28:03.109 15:53:24 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:28:04.068 [2024-11-05 15:53:25.166616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:04.068 [2024-11-05 15:53:25.265064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:04.068 [2024-11-05 15:53:25.265272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:04.068 [2024-11-05 15:53:25.391811] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:28:04.068 [2024-11-05 15:53:25.392037] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:28:06.607 spdk_app_start Round 1 00:28:06.607 15:53:27 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:28:06.607 15:53:27 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:28:06.607 15:53:27 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58414 /var/tmp/spdk-nbd.sock 00:28:06.607 15:53:27 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58414 ']' 00:28:06.607 15:53:27 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:28:06.607 15:53:27 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:06.607 15:53:27 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:28:06.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
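Round 0 is complete at this point. Each round's data path is the same nbd round-trip: fill a scratch file with random data, copy it onto every exported nbd device with O_DIRECT, then byte-compare each device against the file. Condensed from the nbd_common.sh trace above, paths as logged:

    tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256          # 1 MiB of random data
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done
    for dev in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp_file" "$dev"                          # non-zero exit on any mismatch
    done
    rm "$tmp_file"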
00:28:06.607 15:53:27 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:06.607 15:53:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:28:06.607 15:53:27 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:06.607 15:53:27 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:28:06.607 15:53:27 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:28:06.607 Malloc0 00:28:06.607 15:53:27 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:28:06.865 Malloc1 00:28:06.865 15:53:28 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:28:06.865 15:53:28 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:06.865 15:53:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:28:06.865 15:53:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:28:06.865 15:53:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:06.865 15:53:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:28:06.865 15:53:28 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:28:06.865 15:53:28 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:06.865 15:53:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:28:06.865 15:53:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:06.865 15:53:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:06.865 15:53:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:06.865 15:53:28 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:28:06.865 15:53:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:06.865 15:53:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:06.865 15:53:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:28:07.124 /dev/nbd0 00:28:07.124 15:53:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:07.124 15:53:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:07.124 15:53:28 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:28:07.124 15:53:28 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:28:07.124 15:53:28 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:28:07.124 15:53:28 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:28:07.124 15:53:28 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:28:07.124 15:53:28 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:28:07.124 15:53:28 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:28:07.124 15:53:28 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:28:07.124 15:53:28 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:28:07.124 1+0 records in 00:28:07.124 1+0 records out 
00:28:07.124 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000262142 s, 15.6 MB/s 00:28:07.124 15:53:28 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:28:07.124 15:53:28 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:28:07.124 15:53:28 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:28:07.124 15:53:28 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:28:07.124 15:53:28 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:28:07.124 15:53:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:07.124 15:53:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:07.124 15:53:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:28:07.384 /dev/nbd1 00:28:07.384 15:53:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:07.384 15:53:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:07.384 15:53:28 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:28:07.384 15:53:28 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:28:07.384 15:53:28 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:28:07.384 15:53:28 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:28:07.384 15:53:28 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:28:07.384 15:53:28 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:28:07.384 15:53:28 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:28:07.384 15:53:28 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:28:07.384 15:53:28 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:28:07.384 1+0 records in 00:28:07.384 1+0 records out 00:28:07.384 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0003399 s, 12.1 MB/s 00:28:07.384 15:53:28 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:28:07.384 15:53:28 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:28:07.384 15:53:28 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:28:07.384 15:53:28 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:28:07.384 15:53:28 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:28:07.384 15:53:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:07.384 15:53:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:07.384 15:53:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:07.384 15:53:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:07.384 15:53:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:07.642 15:53:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:28:07.642 { 00:28:07.642 "nbd_device": "/dev/nbd0", 00:28:07.642 "bdev_name": "Malloc0" 00:28:07.642 }, 00:28:07.642 { 00:28:07.642 "nbd_device": "/dev/nbd1", 00:28:07.642 "bdev_name": "Malloc1" 00:28:07.642 } 
00:28:07.642 ]' 00:28:07.642 15:53:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:28:07.642 { 00:28:07.642 "nbd_device": "/dev/nbd0", 00:28:07.642 "bdev_name": "Malloc0" 00:28:07.642 }, 00:28:07.642 { 00:28:07.642 "nbd_device": "/dev/nbd1", 00:28:07.642 "bdev_name": "Malloc1" 00:28:07.642 } 00:28:07.642 ]' 00:28:07.642 15:53:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:07.642 15:53:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:28:07.642 /dev/nbd1' 00:28:07.642 15:53:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:28:07.642 /dev/nbd1' 00:28:07.642 15:53:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:07.642 15:53:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:28:07.642 15:53:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:28:07.642 15:53:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:28:07.642 15:53:28 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:28:07.642 15:53:28 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:28:07.642 15:53:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:07.642 15:53:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:28:07.642 15:53:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:28:07.642 15:53:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:28:07.642 15:53:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:28:07.642 15:53:28 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:28:07.642 256+0 records in 00:28:07.642 256+0 records out 00:28:07.642 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00420023 s, 250 MB/s 00:28:07.642 15:53:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:07.642 15:53:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:28:07.642 256+0 records in 00:28:07.642 256+0 records out 00:28:07.642 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.018316 s, 57.2 MB/s 00:28:07.642 15:53:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:07.642 15:53:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:28:07.642 256+0 records in 00:28:07.642 256+0 records out 00:28:07.642 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0177404 s, 59.1 MB/s 00:28:07.642 15:53:28 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:28:07.642 15:53:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:07.642 15:53:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:28:07.642 15:53:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:28:07.642 15:53:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:28:07.642 15:53:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:28:07.642 15:53:28 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:28:07.642 15:53:28 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:28:07.642 15:53:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:28:07.642 15:53:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:07.642 15:53:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:28:07.642 15:53:28 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:28:07.642 15:53:28 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:28:07.642 15:53:28 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:07.642 15:53:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:07.642 15:53:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:07.642 15:53:28 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:28:07.642 15:53:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:07.642 15:53:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:28:07.900 15:53:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:07.900 15:53:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:07.900 15:53:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:07.900 15:53:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:07.900 15:53:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:07.900 15:53:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:07.900 15:53:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:28:07.900 15:53:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:28:07.900 15:53:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:07.900 15:53:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:28:08.158 15:53:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:08.158 15:53:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:08.158 15:53:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:08.158 15:53:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:08.159 15:53:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:08.159 15:53:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:08.159 15:53:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:28:08.159 15:53:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:28:08.159 15:53:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:08.159 15:53:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:08.159 15:53:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:08.417 15:53:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:28:08.417 15:53:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:28:08.417 15:53:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:28:08.417 15:53:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:28:08.417 15:53:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:28:08.417 15:53:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:08.417 15:53:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:28:08.417 15:53:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:28:08.417 15:53:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:28:08.417 15:53:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:28:08.417 15:53:29 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:28:08.417 15:53:29 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:28:08.417 15:53:29 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:28:08.676 15:53:29 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:28:09.243 [2024-11-05 15:53:30.420644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:09.243 [2024-11-05 15:53:30.503177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:09.243 [2024-11-05 15:53:30.503384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:09.501 [2024-11-05 15:53:30.606959] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:28:09.501 [2024-11-05 15:53:30.607038] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:28:12.128 15:53:32 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:28:12.128 spdk_app_start Round 2 00:28:12.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:28:12.128 15:53:32 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:28:12.128 15:53:32 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58414 /var/tmp/spdk-nbd.sock 00:28:12.128 15:53:32 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58414 ']' 00:28:12.128 15:53:32 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:28:12.128 15:53:32 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:12.128 15:53:32 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
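The waitfornbd/waitfornbd_exit helpers traced in each round poll /proc/partitions up to 20 times for the device name to appear (or, on teardown, disappear); waitfornbd then issues one O_DIRECT read to prove the device actually serves I/O. A reconstruction from the xtrace — the retry delay is an assumption, since the trace only shows the counter and the grep:

    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1    # assumed back-off between polls; not visible in the trace
        done
        # prove the device is readable: one 4 KiB direct read, checked by size
        local testfile=/home/vagrant/spdk_repo/spdk/test/event/nbdtest
        dd if=/dev/"$nbd_name" of="$testfile" bs=4096 count=1 iflag=direct
        local size
        size=$(stat -c %s "$testfile")
        rm -f "$testfile"
        [ "$size" != 0 ]    # the trace's '[' 4096 '!=' 0 ']' check
    }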
00:28:12.128 15:53:32 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:12.128 15:53:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:28:12.128 15:53:33 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:12.128 15:53:33 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:28:12.128 15:53:33 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:28:12.128 Malloc0 00:28:12.128 15:53:33 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:28:12.387 Malloc1 00:28:12.387 15:53:33 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:28:12.387 15:53:33 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:12.387 15:53:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:28:12.387 15:53:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:28:12.387 15:53:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:12.387 15:53:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:28:12.387 15:53:33 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:28:12.387 15:53:33 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:12.387 15:53:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:28:12.387 15:53:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:12.387 15:53:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:12.387 15:53:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:12.387 15:53:33 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:28:12.387 15:53:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:12.387 15:53:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:12.387 15:53:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:28:12.646 /dev/nbd0 00:28:12.646 15:53:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:12.646 15:53:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:12.646 15:53:33 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:28:12.646 15:53:33 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:28:12.646 15:53:33 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:28:12.646 15:53:33 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:28:12.646 15:53:33 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:28:12.646 15:53:33 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:28:12.646 15:53:33 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:28:12.646 15:53:33 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:28:12.646 15:53:33 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:28:12.646 1+0 records in 00:28:12.646 1+0 records out 
00:28:12.646 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000213026 s, 19.2 MB/s 00:28:12.646 15:53:33 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:28:12.646 15:53:33 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:28:12.646 15:53:33 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:28:12.646 15:53:33 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:28:12.646 15:53:33 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:28:12.646 15:53:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:12.646 15:53:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:12.646 15:53:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:28:12.646 /dev/nbd1 00:28:12.904 15:53:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:12.904 15:53:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:12.904 15:53:34 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:28:12.904 15:53:34 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:28:12.904 15:53:34 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:28:12.904 15:53:34 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:28:12.904 15:53:34 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:28:12.904 15:53:34 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:28:12.904 15:53:34 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:28:12.904 15:53:34 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:28:12.904 15:53:34 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:28:12.904 1+0 records in 00:28:12.904 1+0 records out 00:28:12.904 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240958 s, 17.0 MB/s 00:28:12.904 15:53:34 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:28:12.904 15:53:34 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:28:12.904 15:53:34 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:28:12.904 15:53:34 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:28:12.904 15:53:34 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:28:12.904 15:53:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:12.904 15:53:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:12.904 15:53:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:12.904 15:53:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:12.904 15:53:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:13.163 15:53:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:28:13.163 { 00:28:13.163 "nbd_device": "/dev/nbd0", 00:28:13.163 "bdev_name": "Malloc0" 00:28:13.163 }, 00:28:13.163 { 00:28:13.163 "nbd_device": "/dev/nbd1", 00:28:13.163 "bdev_name": "Malloc1" 00:28:13.163 } 
00:28:13.163 ]' 00:28:13.163 15:53:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:28:13.163 { 00:28:13.163 "nbd_device": "/dev/nbd0", 00:28:13.163 "bdev_name": "Malloc0" 00:28:13.163 }, 00:28:13.163 { 00:28:13.163 "nbd_device": "/dev/nbd1", 00:28:13.163 "bdev_name": "Malloc1" 00:28:13.163 } 00:28:13.163 ]' 00:28:13.163 15:53:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:13.163 15:53:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:28:13.163 /dev/nbd1' 00:28:13.163 15:53:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:28:13.163 /dev/nbd1' 00:28:13.163 15:53:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:13.163 15:53:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:28:13.163 15:53:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:28:13.163 15:53:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:28:13.163 15:53:34 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:28:13.163 15:53:34 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:28:13.163 15:53:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:13.163 15:53:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:28:13.163 15:53:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:28:13.163 15:53:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:28:13.163 15:53:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:28:13.163 15:53:34 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:28:13.163 256+0 records in 00:28:13.163 256+0 records out 00:28:13.163 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00789137 s, 133 MB/s 00:28:13.163 15:53:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:13.163 15:53:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:28:13.163 256+0 records in 00:28:13.163 256+0 records out 00:28:13.163 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0162615 s, 64.5 MB/s 00:28:13.164 15:53:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:13.164 15:53:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:28:13.164 256+0 records in 00:28:13.164 256+0 records out 00:28:13.164 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0302106 s, 34.7 MB/s 00:28:13.164 15:53:34 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:28:13.164 15:53:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:13.164 15:53:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:28:13.164 15:53:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:28:13.164 15:53:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:28:13.164 15:53:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:28:13.164 15:53:34 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:28:13.164 15:53:34 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:13.164 15:53:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:28:13.164 15:53:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:13.164 15:53:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:28:13.164 15:53:34 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:28:13.164 15:53:34 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:28:13.164 15:53:34 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:13.164 15:53:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:13.164 15:53:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:13.164 15:53:34 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:28:13.164 15:53:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:13.164 15:53:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:28:13.452 15:53:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:13.452 15:53:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:13.452 15:53:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:13.452 15:53:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:13.452 15:53:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:13.452 15:53:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:13.452 15:53:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:28:13.452 15:53:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:28:13.452 15:53:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:13.452 15:53:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:28:13.710 15:53:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:13.710 15:53:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:13.710 15:53:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:13.710 15:53:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:13.710 15:53:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:13.710 15:53:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:13.710 15:53:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:28:13.711 15:53:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:28:13.711 15:53:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:13.711 15:53:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:13.711 15:53:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:13.970 15:53:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:28:13.970 15:53:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:13.970 15:53:35 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:28:13.970 15:53:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:28:13.970 15:53:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:13.970 15:53:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:28:13.970 15:53:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:28:13.970 15:53:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:28:13.970 15:53:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:28:13.970 15:53:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:28:13.970 15:53:35 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:28:13.970 15:53:35 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:28:13.970 15:53:35 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:28:14.228 15:53:35 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:28:14.795 [2024-11-05 15:53:36.105335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:15.054 [2024-11-05 15:53:36.191323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:15.054 [2024-11-05 15:53:36.191603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:15.054 [2024-11-05 15:53:36.296558] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:28:15.054 [2024-11-05 15:53:36.296621] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:28:17.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:28:17.585 15:53:38 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58414 /var/tmp/spdk-nbd.sock 00:28:17.585 15:53:38 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58414 ']' 00:28:17.585 15:53:38 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:28:17.585 15:53:38 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:17.585 15:53:38 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
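After the disks are stopped, each round asserts that nbd_get_disks reports an empty set before the app instance is killed. The count is derived by grepping the jq output, with a bare true fallback so an empty list does not abort the suite; condensed from the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    disks_json=$($rpc -s /var/tmp/spdk-nbd.sock nbd_get_disks)       # '[]' once both disks are stopped
    nbd_disks_name=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
    count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)       # the trace's bare 'true' fallback
    [ "$count" -ne 0 ] && return 1                                   # count=0 here, so the check passes
    $rpc -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
    sleep 3    # let the instance come back up before the next round's waitforlisten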
00:28:17.585 15:53:38 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:17.585 15:53:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:28:17.585 15:53:38 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:17.585 15:53:38 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:28:17.585 15:53:38 event.app_repeat -- event/event.sh@39 -- # killprocess 58414 00:28:17.585 15:53:38 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 58414 ']' 00:28:17.585 15:53:38 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 58414 00:28:17.585 15:53:38 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:28:17.585 15:53:38 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:17.585 15:53:38 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58414 00:28:17.585 15:53:38 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:17.585 killing process with pid 58414 00:28:17.585 15:53:38 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:17.585 15:53:38 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58414' 00:28:17.585 15:53:38 event.app_repeat -- common/autotest_common.sh@971 -- # kill 58414 00:28:17.585 15:53:38 event.app_repeat -- common/autotest_common.sh@976 -- # wait 58414 00:28:18.152 spdk_app_start is called in Round 0. 00:28:18.153 Shutdown signal received, stop current app iteration 00:28:18.153 Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 reinitialization... 00:28:18.153 spdk_app_start is called in Round 1. 00:28:18.153 Shutdown signal received, stop current app iteration 00:28:18.153 Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 reinitialization... 00:28:18.153 spdk_app_start is called in Round 2. 00:28:18.153 Shutdown signal received, stop current app iteration 00:28:18.153 Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 reinitialization... 00:28:18.153 spdk_app_start is called in Round 3. 00:28:18.153 Shutdown signal received, stop current app iteration 00:28:18.153 ************************************ 00:28:18.153 END TEST app_repeat 00:28:18.153 ************************************ 00:28:18.153 15:53:39 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:28:18.153 15:53:39 event.app_repeat -- event/event.sh@42 -- # return 0 00:28:18.153 00:28:18.153 real 0m17.934s 00:28:18.153 user 0m39.317s 00:28:18.153 sys 0m2.095s 00:28:18.153 15:53:39 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:18.153 15:53:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:28:18.153 15:53:39 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:28:18.153 15:53:39 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:28:18.153 15:53:39 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:28:18.153 15:53:39 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:18.153 15:53:39 event -- common/autotest_common.sh@10 -- # set +x 00:28:18.153 ************************************ 00:28:18.153 START TEST cpu_locks 00:28:18.153 ************************************ 00:28:18.153 15:53:39 event.cpu_locks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:28:18.153 * Looking for test storage... 
00:28:18.153 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:28:18.153 15:53:39 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:18.153 15:53:39 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:28:18.153 15:53:39 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:18.153 15:53:39 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:18.153 15:53:39 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:18.153 15:53:39 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:18.153 15:53:39 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:18.153 15:53:39 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:28:18.153 15:53:39 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:28:18.153 15:53:39 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:28:18.153 15:53:39 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:28:18.153 15:53:39 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:28:18.153 15:53:39 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:28:18.153 15:53:39 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:28:18.153 15:53:39 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:18.153 15:53:39 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:28:18.153 15:53:39 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:28:18.153 15:53:39 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:18.153 15:53:39 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:18.153 15:53:39 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:28:18.153 15:53:39 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:28:18.153 15:53:39 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:18.153 15:53:39 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:28:18.153 15:53:39 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:28:18.153 15:53:39 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:28:18.153 15:53:39 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:28:18.153 15:53:39 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:18.153 15:53:39 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:28:18.153 15:53:39 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:28:18.153 15:53:39 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:18.153 15:53:39 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:18.153 15:53:39 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:28:18.153 15:53:39 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:18.153 15:53:39 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:18.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:18.153 --rc genhtml_branch_coverage=1 00:28:18.153 --rc genhtml_function_coverage=1 00:28:18.153 --rc genhtml_legend=1 00:28:18.153 --rc geninfo_all_blocks=1 00:28:18.153 --rc geninfo_unexecuted_blocks=1 00:28:18.153 00:28:18.153 ' 00:28:18.153 15:53:39 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:18.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:18.153 --rc genhtml_branch_coverage=1 00:28:18.153 --rc genhtml_function_coverage=1 
00:28:18.153 --rc genhtml_legend=1 00:28:18.153 --rc geninfo_all_blocks=1 00:28:18.153 --rc geninfo_unexecuted_blocks=1 00:28:18.153 00:28:18.153 ' 00:28:18.153 15:53:39 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:18.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:18.153 --rc genhtml_branch_coverage=1 00:28:18.153 --rc genhtml_function_coverage=1 00:28:18.153 --rc genhtml_legend=1 00:28:18.153 --rc geninfo_all_blocks=1 00:28:18.153 --rc geninfo_unexecuted_blocks=1 00:28:18.153 00:28:18.153 ' 00:28:18.153 15:53:39 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:18.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:18.153 --rc genhtml_branch_coverage=1 00:28:18.153 --rc genhtml_function_coverage=1 00:28:18.153 --rc genhtml_legend=1 00:28:18.153 --rc geninfo_all_blocks=1 00:28:18.153 --rc geninfo_unexecuted_blocks=1 00:28:18.153 00:28:18.153 ' 00:28:18.153 15:53:39 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:28:18.153 15:53:39 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:28:18.153 15:53:39 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:28:18.153 15:53:39 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:28:18.153 15:53:39 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:28:18.153 15:53:39 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:18.153 15:53:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:28:18.153 ************************************ 00:28:18.153 START TEST default_locks 00:28:18.153 ************************************ 00:28:18.153 15:53:39 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:28:18.153 15:53:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58839 00:28:18.153 15:53:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58839 00:28:18.153 15:53:39 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 58839 ']' 00:28:18.153 15:53:39 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:18.153 15:53:39 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:18.153 15:53:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:28:18.153 15:53:39 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:18.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:18.153 15:53:39 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:18.153 15:53:39 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:28:18.471 [2024-11-05 15:53:39.522324] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 
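The xtrace block above is scripts/common.sh deciding whether the installed lcov predates major version 2 (lt 1.15 2 via cmp_versions), which selects the --rc option set exported into LCOV_OPTS and LCOV. A minimal sketch of that component-wise compare, assuming plain bash and dropping the decimal sanitizing step seen in the trace:

    # Split both versions on ".-:" and compare field by field; missing
    # fields count as 0, so "2" compares like "2.0".
    cmp_versions() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local op=$2 v max
        max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' ]]
    }
    lt() { cmp_versions "$1" '<' "$2"; }   # lt 1.15 2 succeeds, so the newer lcov flags are used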
00:28:18.471 [2024-11-05 15:53:39.522451] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58839 ] 00:28:18.471 [2024-11-05 15:53:39.685148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:18.471 [2024-11-05 15:53:39.789351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:19.037 15:53:40 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:19.037 15:53:40 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:28:19.037 15:53:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58839 00:28:19.037 15:53:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58839 00:28:19.037 15:53:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:28:19.296 15:53:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58839 00:28:19.296 15:53:40 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 58839 ']' 00:28:19.296 15:53:40 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 58839 00:28:19.296 15:53:40 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:28:19.296 15:53:40 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:19.296 15:53:40 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58839 00:28:19.296 killing process with pid 58839 00:28:19.296 15:53:40 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:19.296 15:53:40 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:19.296 15:53:40 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58839' 00:28:19.296 15:53:40 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 58839 00:28:19.296 15:53:40 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 58839 00:28:21.224 15:53:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58839 00:28:21.224 15:53:42 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:28:21.224 15:53:42 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58839 00:28:21.224 15:53:42 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:28:21.224 15:53:42 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:21.224 15:53:42 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:28:21.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
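As the locks_exist trace above shows, a target started with -m 0x1 and locking enabled claims core 0 by taking a lock on /var/tmp/spdk_cpu_lock_000, and the check is simply whether lslocks reports such a lock for the pid. A sketch of that helper, assuming util-linux lslocks is available:

    # Does <pid> hold one of SPDK's per-core lock files?
    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }
    locks_exist 58839 && echo "pid 58839 holds its core lock"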
00:28:21.224 ERROR: process (pid: 58839) is no longer running 00:28:21.224 ************************************ 00:28:21.224 END TEST default_locks 00:28:21.224 ************************************ 00:28:21.224 15:53:42 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:21.224 15:53:42 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 58839 00:28:21.224 15:53:42 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 58839 ']' 00:28:21.224 15:53:42 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:21.224 15:53:42 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:21.224 15:53:42 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:21.224 15:53:42 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:21.224 15:53:42 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:28:21.224 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (58839) - No such process 00:28:21.224 15:53:42 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:21.224 15:53:42 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:28:21.224 15:53:42 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:28:21.224 15:53:42 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:21.224 15:53:42 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:21.224 15:53:42 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:21.224 15:53:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:28:21.224 15:53:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:28:21.224 15:53:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:28:21.224 15:53:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:28:21.224 00:28:21.224 real 0m2.630s 00:28:21.224 user 0m2.636s 00:28:21.224 sys 0m0.418s 00:28:21.224 15:53:42 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:21.224 15:53:42 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:28:21.224 15:53:42 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:28:21.224 15:53:42 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:28:21.224 15:53:42 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:21.224 15:53:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:28:21.224 ************************************ 00:28:21.224 START TEST default_locks_via_rpc 00:28:21.224 ************************************ 00:28:21.224 15:53:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:28:21.224 15:53:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58903 00:28:21.224 15:53:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58903 00:28:21.224 15:53:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 58903 ']' 
00:28:21.224 15:53:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:21.224 15:53:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:21.224 15:53:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:28:21.224 15:53:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:21.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:21.224 15:53:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:21.224 15:53:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:21.224 [2024-11-05 15:53:42.194665] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 00:28:21.224 [2024-11-05 15:53:42.195005] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58903 ] 00:28:21.224 [2024-11-05 15:53:42.349959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:21.224 [2024-11-05 15:53:42.433855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:21.790 15:53:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:21.790 15:53:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:28:21.790 15:53:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:28:21.790 15:53:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.790 15:53:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:21.790 15:53:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.790 15:53:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:28:21.790 15:53:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:28:21.790 15:53:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:28:21.790 15:53:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:28:21.790 15:53:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:28:21.790 15:53:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.790 15:53:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:21.790 15:53:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.790 15:53:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58903 00:28:21.790 15:53:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58903 00:28:21.790 15:53:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:28:22.048 15:53:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58903 00:28:22.048 15:53:43 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 58903 ']' 00:28:22.048 15:53:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 58903 00:28:22.048 15:53:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:28:22.048 15:53:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:22.048 15:53:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58903 00:28:22.048 killing process with pid 58903 00:28:22.048 15:53:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:22.048 15:53:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:22.048 15:53:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58903' 00:28:22.048 15:53:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 58903 00:28:22.048 15:53:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 58903 00:28:23.419 00:28:23.419 real 0m2.414s 00:28:23.419 user 0m2.466s 00:28:23.419 sys 0m0.426s 00:28:23.419 ************************************ 00:28:23.419 END TEST default_locks_via_rpc 00:28:23.419 ************************************ 00:28:23.419 15:53:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:23.419 15:53:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:23.419 15:53:44 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:28:23.419 15:53:44 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:28:23.419 15:53:44 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:23.419 15:53:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:28:23.419 ************************************ 00:28:23.419 START TEST non_locking_app_on_locked_coremask 00:28:23.419 ************************************ 00:28:23.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:23.419 15:53:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:28:23.419 15:53:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58955 00:28:23.419 15:53:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:28:23.419 15:53:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58955 /var/tmp/spdk.sock 00:28:23.419 15:53:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58955 ']' 00:28:23.419 15:53:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:23.419 15:53:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:23.419 15:53:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:23.419 15:53:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:23.419 15:53:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:28:23.419 [2024-11-05 15:53:44.638657] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 00:28:23.419 [2024-11-05 15:53:44.639017] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58955 ] 00:28:23.676 [2024-11-05 15:53:44.804194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:23.676 [2024-11-05 15:53:44.940546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:24.242 15:53:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:24.242 15:53:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:28:24.242 15:53:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:28:24.242 15:53:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58971 00:28:24.242 15:53:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58971 /var/tmp/spdk2.sock 00:28:24.242 15:53:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58971 ']' 00:28:24.242 15:53:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:28:24.242 15:53:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:24.242 15:53:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:28:24.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:28:24.242 15:53:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:24.242 15:53:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:28:24.242 [2024-11-05 15:53:45.542759] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 00:28:24.242 [2024-11-05 15:53:45.543150] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58971 ] 00:28:24.500 [2024-11-05 15:53:45.706125] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
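The second spdk_tgt above comes up on the already-claimed core only because it was launched with --disable-cpumask-locks, hence the "CPU core locks deactivated." notice. Reconstructed from the trace, the shape of the two launches is roughly:

    # First target claims core 0; the second shares it with locking off.
    build/bin/spdk_tgt -m 0x1 &                      # takes /var/tmp/spdk_cpu_lock_000
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks \
        -r /var/tmp/spdk2.sock &                     # no lock taken, separate RPC socket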
00:28:24.500 [2024-11-05 15:53:45.706179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:24.758 [2024-11-05 15:53:45.886346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:25.693 15:53:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:25.693 15:53:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:28:25.693 15:53:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58955 00:28:25.693 15:53:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58955 00:28:25.693 15:53:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:28:25.951 15:53:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58955 00:28:25.951 15:53:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58955 ']' 00:28:25.951 15:53:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 58955 00:28:25.951 15:53:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:28:25.951 15:53:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:25.951 15:53:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58955 00:28:25.951 killing process with pid 58955 00:28:25.951 15:53:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:25.951 15:53:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:25.951 15:53:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58955' 00:28:25.951 15:53:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 58955 00:28:25.951 15:53:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 58955 00:28:28.513 15:53:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58971 00:28:28.513 15:53:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58971 ']' 00:28:28.513 15:53:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 58971 00:28:28.513 15:53:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:28:28.513 15:53:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:28.513 15:53:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58971 00:28:28.513 killing process with pid 58971 00:28:28.513 15:53:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:28.513 15:53:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:28.513 15:53:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58971' 00:28:28.513 15:53:49 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 58971 00:28:28.513 15:53:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 58971 00:28:29.884 00:28:29.884 real 0m6.358s 00:28:29.884 user 0m6.595s 00:28:29.884 sys 0m0.777s 00:28:29.884 15:53:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:29.884 ************************************ 00:28:29.884 END TEST non_locking_app_on_locked_coremask 00:28:29.884 ************************************ 00:28:29.884 15:53:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:28:29.884 15:53:50 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:28:29.884 15:53:50 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:28:29.884 15:53:50 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:29.884 15:53:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:28:29.884 ************************************ 00:28:29.884 START TEST locking_app_on_unlocked_coremask 00:28:29.884 ************************************ 00:28:29.884 15:53:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:28:29.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:29.884 15:53:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59072 00:28:29.884 15:53:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59072 /var/tmp/spdk.sock 00:28:29.884 15:53:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59072 ']' 00:28:29.884 15:53:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:29.884 15:53:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:29.884 15:53:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:29.884 15:53:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:29.884 15:53:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:28:29.884 15:53:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:28:29.884 [2024-11-05 15:53:51.036437] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 00:28:29.884 [2024-11-05 15:53:51.036562] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59072 ] 00:28:29.884 [2024-11-05 15:53:51.197760] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
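locking_app_on_unlocked_coremask, starting above, reverses the roles: the first target (pid 59072) boots with locks deactivated and the second, default-locking target is the one expected to own core 0, which the suite later verifies with locks_exist 59084. Sketched with the pids from the trace:

    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &   # pid 59072: takes no lock
    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &    # pid 59084: locks core 0
    lslocks -p 59084 | grep spdk_cpu_lock                 # the lock belongs to the second pid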
00:28:29.884 [2024-11-05 15:53:51.197820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:30.142 [2024-11-05 15:53:51.298264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:30.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:28:30.724 15:53:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:30.724 15:53:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:28:30.724 15:53:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59084 00:28:30.725 15:53:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:28:30.725 15:53:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59084 /var/tmp/spdk2.sock 00:28:30.725 15:53:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59084 ']' 00:28:30.725 15:53:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:28:30.725 15:53:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:30.725 15:53:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:28:30.725 15:53:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:30.725 15:53:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:28:30.725 [2024-11-05 15:53:52.007429] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 
00:28:30.725 [2024-11-05 15:53:52.007747] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59084 ] 00:28:30.983 [2024-11-05 15:53:52.174250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:31.244 [2024-11-05 15:53:52.377915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:32.177 15:53:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:32.177 15:53:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:28:32.177 15:53:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59084 00:28:32.177 15:53:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59084 00:28:32.177 15:53:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:28:32.742 15:53:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59072 00:28:32.742 15:53:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59072 ']' 00:28:32.742 15:53:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 59072 00:28:32.742 15:53:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:28:32.742 15:53:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:32.742 15:53:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59072 00:28:32.742 killing process with pid 59072 00:28:32.742 15:53:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:32.742 15:53:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:32.742 15:53:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59072' 00:28:32.742 15:53:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 59072 00:28:32.742 15:53:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 59072 00:28:36.023 15:53:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59084 00:28:36.023 15:53:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59084 ']' 00:28:36.023 15:53:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 59084 00:28:36.023 15:53:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:28:36.023 15:53:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:36.023 15:53:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59084 00:28:36.023 killing process with pid 59084 00:28:36.023 15:53:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:36.023 15:53:56 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:36.023 15:53:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59084' 00:28:36.023 15:53:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 59084 00:28:36.023 15:53:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 59084 00:28:36.587 00:28:36.587 real 0m6.938s 00:28:36.587 user 0m7.185s 00:28:36.587 sys 0m0.822s 00:28:36.587 15:53:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:36.588 ************************************ 00:28:36.588 END TEST locking_app_on_unlocked_coremask 00:28:36.588 ************************************ 00:28:36.588 15:53:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:28:36.588 15:53:57 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:28:36.588 15:53:57 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:28:36.588 15:53:57 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:36.588 15:53:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:28:36.588 ************************************ 00:28:36.588 START TEST locking_app_on_locked_coremask 00:28:36.588 ************************************ 00:28:36.588 15:53:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:28:36.588 15:53:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59187 00:28:36.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:36.588 15:53:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59187 /var/tmp/spdk.sock 00:28:36.588 15:53:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59187 ']' 00:28:36.588 15:53:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:28:36.588 15:53:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:36.588 15:53:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:36.588 15:53:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:36.588 15:53:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:36.588 15:53:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:28:36.845 [2024-11-05 15:53:58.009080] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 
00:28:36.845 [2024-11-05 15:53:58.009765] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59187 ] 00:28:36.845 [2024-11-05 15:53:58.165622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:37.104 [2024-11-05 15:53:58.253308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:37.671 15:53:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:37.671 15:53:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:28:37.671 15:53:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59197 00:28:37.671 15:53:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:28:37.671 15:53:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59197 /var/tmp/spdk2.sock 00:28:37.671 15:53:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:28:37.671 15:53:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59197 /var/tmp/spdk2.sock 00:28:37.671 15:53:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:28:37.671 15:53:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:37.671 15:53:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:28:37.671 15:53:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:37.671 15:53:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59197 /var/tmp/spdk2.sock 00:28:37.671 15:53:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59197 ']' 00:28:37.671 15:53:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:28:37.671 15:53:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:37.671 15:53:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:28:37.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:28:37.671 15:53:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:37.671 15:53:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:28:37.671 [2024-11-05 15:53:58.902515] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 
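The NOT waitforlisten 59197 wrapper above encodes the expectation that this second launch must fail: both instances run with locking enabled, so pid 59197 cannot claim core 0 while 59187 holds it. A sketch of the NOT idiom visible in the trace (local es, then (( !es == 0 ))):

    # Succeed only when the wrapped command fails.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))
    }
    NOT waitforlisten 59197 /var/tmp/spdk2.sock   # passes because startup aborts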
00:28:37.671 [2024-11-05 15:53:58.902631] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59197 ] 00:28:37.929 [2024-11-05 15:53:59.073167] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59187 has claimed it. 00:28:37.929 [2024-11-05 15:53:59.073251] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:28:38.514 ERROR: process (pid: 59197) is no longer running 00:28:38.514 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (59197) - No such process 00:28:38.514 15:53:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:38.514 15:53:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:28:38.514 15:53:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:28:38.514 15:53:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:38.514 15:53:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:38.514 15:53:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:38.514 15:53:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59187 00:28:38.514 15:53:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:28:38.514 15:53:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59187 00:28:38.514 15:53:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59187 00:28:38.514 15:53:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59187 ']' 00:28:38.514 15:53:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 59187 00:28:38.514 15:53:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:28:38.514 15:53:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:38.514 15:53:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59187 00:28:38.514 15:53:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:38.514 15:53:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:38.514 killing process with pid 59187 00:28:38.514 15:53:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59187' 00:28:38.514 15:53:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 59187 00:28:38.514 15:53:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 59187 00:28:39.890 00:28:39.890 real 0m3.115s 00:28:39.890 user 0m3.333s 00:28:39.890 sys 0m0.542s 00:28:39.890 15:54:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:39.890 15:54:01 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:28:39.890 ************************************ 00:28:39.890 END TEST locking_app_on_locked_coremask 00:28:39.890 ************************************ 00:28:39.890 15:54:01 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:28:39.890 15:54:01 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:28:39.890 15:54:01 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:39.890 15:54:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:28:39.890 ************************************ 00:28:39.890 START TEST locking_overlapped_coremask 00:28:39.890 ************************************ 00:28:39.890 15:54:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:28:39.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:39.890 15:54:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59256 00:28:39.890 15:54:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59256 /var/tmp/spdk.sock 00:28:39.890 15:54:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 59256 ']' 00:28:39.890 15:54:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:39.890 15:54:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:39.890 15:54:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:28:39.890 15:54:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:39.890 15:54:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:39.890 15:54:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:28:39.890 [2024-11-05 15:54:01.153888] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 
00:28:39.890 [2024-11-05 15:54:01.153992] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59256 ] 00:28:40.148 [2024-11-05 15:54:01.303227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:40.148 [2024-11-05 15:54:01.394775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:40.148 [2024-11-05 15:54:01.394860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:40.148 [2024-11-05 15:54:01.394905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:40.728 15:54:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:40.728 15:54:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:28:40.728 15:54:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:28:40.728 15:54:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59274 00:28:40.728 15:54:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59274 /var/tmp/spdk2.sock 00:28:40.728 15:54:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:28:40.728 15:54:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59274 /var/tmp/spdk2.sock 00:28:40.728 15:54:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:28:40.728 15:54:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:40.728 15:54:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:28:40.728 15:54:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:40.728 15:54:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59274 /var/tmp/spdk2.sock 00:28:40.728 15:54:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 59274 ']' 00:28:40.728 15:54:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:28:40.728 15:54:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:40.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:28:40.728 15:54:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:28:40.728 15:54:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:40.728 15:54:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:28:40.728 [2024-11-05 15:54:02.086388] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 
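The contention being arranged above lives in the cpumasks themselves: -m 0x7 selects cores 0-2 and -m 0x1c selects cores 2-4, so core 2 is claimed twice. A quick way to decode such masks in bash (mask_to_cores is our name for illustration, not a suite helper):

    # Print the core numbers selected by a hex cpumask.
    mask_to_cores() {
        local mask=$(( $1 )) core
        for (( core = 0; mask > 0; core++, mask >>= 1 )); do
            (( mask & 1 )) && printf '%d ' "$core"
        done
        echo
    }
    mask_to_cores 0x7    # -> 0 1 2
    mask_to_cores 0x1c   # -> 2 3 4  (core 2 is the collision)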
00:28:40.728 [2024-11-05 15:54:02.086507] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59274 ] 00:28:40.987 [2024-11-05 15:54:02.252113] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59256 has claimed it. 00:28:40.987 [2024-11-05 15:54:02.252179] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:28:41.552 ERROR: process (pid: 59274) is no longer running 00:28:41.552 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (59274) - No such process 00:28:41.552 15:54:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:41.552 15:54:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:28:41.552 15:54:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:28:41.552 15:54:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:41.552 15:54:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:41.552 15:54:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:41.552 15:54:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:28:41.552 15:54:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:28:41.552 15:54:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:28:41.552 15:54:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:28:41.552 15:54:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59256 00:28:41.552 15:54:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 59256 ']' 00:28:41.552 15:54:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 59256 00:28:41.552 15:54:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:28:41.553 15:54:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:41.553 15:54:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59256 00:28:41.553 killing process with pid 59256 00:28:41.553 15:54:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:41.553 15:54:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:41.553 15:54:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59256' 00:28:41.553 15:54:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 59256 00:28:41.553 15:54:02 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 59256 00:28:42.942 00:28:42.942 real 0m2.983s 00:28:42.942 user 0m8.240s 00:28:42.942 sys 0m0.402s 00:28:42.942 15:54:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:42.942 15:54:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:28:42.942 ************************************ 00:28:42.942 END TEST locking_overlapped_coremask 00:28:42.942 ************************************ 00:28:42.942 15:54:04 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:28:42.942 15:54:04 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:28:42.942 15:54:04 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:42.942 15:54:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:28:42.942 ************************************ 00:28:42.942 START TEST locking_overlapped_coremask_via_rpc 00:28:42.942 ************************************ 00:28:42.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:42.942 15:54:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:28:42.942 15:54:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59327 00:28:42.942 15:54:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59327 /var/tmp/spdk.sock 00:28:42.942 15:54:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59327 ']' 00:28:42.942 15:54:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:42.943 15:54:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:28:42.943 15:54:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:42.943 15:54:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:42.943 15:54:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:42.943 15:54:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:42.943 [2024-11-05 15:54:04.181261] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 00:28:42.943 [2024-11-05 15:54:04.181383] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59327 ] 00:28:43.201 [2024-11-05 15:54:04.335139] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
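Before the via_rpc variant above gets going, note the check_remaining_locks pass that closed the previous test: once the 0x1c instance aborted, exactly the three lock files owned by the surviving 0x7 target had to remain, with no stray locks left behind. As traced:

    # Only /var/tmp/spdk_cpu_lock_000..002 may exist at this point.
    check_remaining_locks() {
        local locks=(/var/tmp/spdk_cpu_lock_*)
        local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
        [[ ${locks[*]} == "${locks_expected[*]}" ]]
    }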
00:28:43.201 [2024-11-05 15:54:04.335184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:43.201 [2024-11-05 15:54:04.422517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:43.201 [2024-11-05 15:54:04.422828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:43.201 [2024-11-05 15:54:04.422876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:43.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:28:43.767 15:54:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:43.767 15:54:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:28:43.767 15:54:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59339 00:28:43.767 15:54:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59339 /var/tmp/spdk2.sock 00:28:43.767 15:54:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:28:43.767 15:54:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59339 ']' 00:28:43.767 15:54:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:28:43.767 15:54:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:43.767 15:54:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:28:43.767 15:54:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:43.767 15:54:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:43.767 [2024-11-05 15:54:05.016053] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 00:28:43.767 [2024-11-05 15:54:05.016512] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59339 ] 00:28:44.024 [2024-11-05 15:54:05.193955] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:28:44.024 [2024-11-05 15:54:05.194015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:44.281 [2024-11-05 15:54:05.402123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:44.281 [2024-11-05 15:54:05.402185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:44.281 [2024-11-05 15:54:05.402212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:45.224 15:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:45.224 15:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:28:45.224 15:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:28:45.224 15:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.224 15:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:45.224 15:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.224 15:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:28:45.224 15:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:28:45.224 15:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:28:45.224 15:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:45.224 15:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:45.224 15:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:45.224 15:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:45.224 15:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:28:45.224 15:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.224 15:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:45.224 [2024-11-05 15:54:06.566889] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59327 has claimed it. 00:28:45.224 request: 00:28:45.224 { 00:28:45.224 "method": "framework_enable_cpumask_locks", 00:28:45.224 "req_id": 1 00:28:45.224 } 00:28:45.224 Got JSON-RPC error response 00:28:45.224 response: 00:28:45.224 { 00:28:45.224 "code": -32603, 00:28:45.224 "message": "Failed to claim CPU core: 2" 00:28:45.224 } 00:28:45.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
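The error above is determined entirely by the two coremasks: framework_enable_cpumask_locks succeeded on the first target, which took the lock files for its cores, so the second target can only fail on whatever cores the two masks share. That intersection is exactly core 2:

mask1=0x7    # first target  (pid 59327): binary 00111 -> cores 0,1,2
mask2=0x1c   # second target (pid 59339): binary 11100 -> cores 2,3,4
printf 'overlap: 0x%x\n' $((mask1 & mask2))   # overlap: 0x4 -> bit 2 -> core 2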
00:28:45.224 15:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:45.224 15:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:28:45.224 15:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:45.224 15:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:45.224 15:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:45.224 15:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59327 /var/tmp/spdk.sock 00:28:45.224 15:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59327 ']' 00:28:45.224 15:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:45.224 15:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:45.224 15:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:45.224 15:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:45.224 15:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:45.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:28:45.481 15:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:45.481 15:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:28:45.481 15:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59339 /var/tmp/spdk2.sock 00:28:45.481 15:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59339 ']' 00:28:45.481 15:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:28:45.481 15:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:45.481 15:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
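With locks held by the first target only, the suite's last assertion (check_remaining_locks, traced below) is that the lock files on disk are exactly the three for cores 0-2 owned by pid 59327. Condensed from the trace:

# Glob the live lock files and require an exact match with cores 0-2.
locks=(/var/tmp/spdk_cpu_lock_*)
locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
[[ ${locks[*]} == "${locks_expected[*]}" ]]   # any extra or missing lock fails the test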
00:28:45.481 15:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:45.481 15:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:45.740 ************************************ 00:28:45.740 END TEST locking_overlapped_coremask_via_rpc 00:28:45.740 ************************************ 00:28:45.740 15:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:45.740 15:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:28:45.740 15:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:28:45.740 15:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:28:45.740 15:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:28:45.740 15:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:28:45.740 00:28:45.740 real 0m2.871s 00:28:45.740 user 0m0.922s 00:28:45.740 sys 0m0.115s 00:28:45.740 15:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:45.740 15:54:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:45.740 15:54:07 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:28:45.740 15:54:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59327 ]] 00:28:45.740 15:54:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59327 00:28:45.740 15:54:07 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59327 ']' 00:28:45.740 15:54:07 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59327 00:28:45.740 15:54:07 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:28:45.740 15:54:07 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:45.740 15:54:07 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59327 00:28:45.740 killing process with pid 59327 00:28:45.740 15:54:07 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:45.740 15:54:07 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:45.740 15:54:07 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59327' 00:28:45.740 15:54:07 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 59327 00:28:45.740 15:54:07 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 59327 00:28:47.640 15:54:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59339 ]] 00:28:47.640 15:54:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59339 00:28:47.640 15:54:08 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59339 ']' 00:28:47.640 15:54:08 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59339 00:28:47.640 15:54:08 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:28:47.640 15:54:08 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:47.640 
15:54:08 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59339 00:28:47.640 killing process with pid 59339 00:28:47.640 15:54:08 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:28:47.640 15:54:08 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:28:47.640 15:54:08 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59339' 00:28:47.640 15:54:08 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 59339 00:28:47.640 15:54:08 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 59339 00:28:48.572 15:54:09 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:28:48.572 Process with pid 59327 is not found 00:28:48.572 15:54:09 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:28:48.572 15:54:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59327 ]] 00:28:48.572 15:54:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59327 00:28:48.572 15:54:09 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59327 ']' 00:28:48.572 15:54:09 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59327 00:28:48.572 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (59327) - No such process 00:28:48.572 15:54:09 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 59327 is not found' 00:28:48.572 15:54:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59339 ]] 00:28:48.572 15:54:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59339 00:28:48.572 Process with pid 59339 is not found 00:28:48.572 15:54:09 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59339 ']' 00:28:48.572 15:54:09 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59339 00:28:48.572 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (59339) - No such process 00:28:48.572 15:54:09 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 59339 is not found' 00:28:48.572 15:54:09 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:28:48.572 ************************************ 00:28:48.572 END TEST cpu_locks 00:28:48.572 ************************************ 00:28:48.572 00:28:48.573 real 0m30.502s 00:28:48.573 user 0m53.045s 00:28:48.573 sys 0m4.273s 00:28:48.573 15:54:09 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:48.573 15:54:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:28:48.573 ************************************ 00:28:48.573 END TEST event 00:28:48.573 ************************************ 00:28:48.573 00:28:48.573 real 0m56.775s 00:28:48.573 user 1m45.500s 00:28:48.573 sys 0m7.125s 00:28:48.573 15:54:09 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:48.573 15:54:09 event -- common/autotest_common.sh@10 -- # set +x 00:28:48.573 15:54:09 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:28:48.573 15:54:09 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:28:48.573 15:54:09 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:48.573 15:54:09 -- common/autotest_common.sh@10 -- # set +x 00:28:48.573 ************************************ 00:28:48.573 START TEST thread 00:28:48.573 ************************************ 00:28:48.573 15:54:09 thread -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:28:48.573 * Looking for test storage... 
00:28:48.573 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:28:48.573 15:54:09 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:48.573 15:54:09 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:28:48.573 15:54:09 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:48.831 15:54:09 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:48.831 15:54:09 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:48.831 15:54:09 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:48.831 15:54:09 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:48.831 15:54:09 thread -- scripts/common.sh@336 -- # IFS=.-: 00:28:48.831 15:54:09 thread -- scripts/common.sh@336 -- # read -ra ver1 00:28:48.831 15:54:09 thread -- scripts/common.sh@337 -- # IFS=.-: 00:28:48.831 15:54:09 thread -- scripts/common.sh@337 -- # read -ra ver2 00:28:48.831 15:54:09 thread -- scripts/common.sh@338 -- # local 'op=<' 00:28:48.831 15:54:09 thread -- scripts/common.sh@340 -- # ver1_l=2 00:28:48.831 15:54:09 thread -- scripts/common.sh@341 -- # ver2_l=1 00:28:48.831 15:54:09 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:48.831 15:54:09 thread -- scripts/common.sh@344 -- # case "$op" in 00:28:48.831 15:54:09 thread -- scripts/common.sh@345 -- # : 1 00:28:48.831 15:54:09 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:48.831 15:54:09 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:48.831 15:54:09 thread -- scripts/common.sh@365 -- # decimal 1 00:28:48.831 15:54:10 thread -- scripts/common.sh@353 -- # local d=1 00:28:48.831 15:54:10 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:48.831 15:54:10 thread -- scripts/common.sh@355 -- # echo 1 00:28:48.831 15:54:10 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:28:48.831 15:54:10 thread -- scripts/common.sh@366 -- # decimal 2 00:28:48.831 15:54:10 thread -- scripts/common.sh@353 -- # local d=2 00:28:48.831 15:54:10 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:48.831 15:54:10 thread -- scripts/common.sh@355 -- # echo 2 00:28:48.831 15:54:10 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:28:48.831 15:54:10 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:48.831 15:54:10 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:48.831 15:54:10 thread -- scripts/common.sh@368 -- # return 0 00:28:48.831 15:54:10 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:48.831 15:54:10 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:48.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.831 --rc genhtml_branch_coverage=1 00:28:48.831 --rc genhtml_function_coverage=1 00:28:48.831 --rc genhtml_legend=1 00:28:48.831 --rc geninfo_all_blocks=1 00:28:48.831 --rc geninfo_unexecuted_blocks=1 00:28:48.831 00:28:48.831 ' 00:28:48.831 15:54:10 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:48.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.831 --rc genhtml_branch_coverage=1 00:28:48.831 --rc genhtml_function_coverage=1 00:28:48.831 --rc genhtml_legend=1 00:28:48.831 --rc geninfo_all_blocks=1 00:28:48.831 --rc geninfo_unexecuted_blocks=1 00:28:48.831 00:28:48.831 ' 00:28:48.831 15:54:10 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:48.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:28:48.831 --rc genhtml_branch_coverage=1 00:28:48.831 --rc genhtml_function_coverage=1 00:28:48.831 --rc genhtml_legend=1 00:28:48.831 --rc geninfo_all_blocks=1 00:28:48.831 --rc geninfo_unexecuted_blocks=1 00:28:48.831 00:28:48.831 ' 00:28:48.831 15:54:10 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:48.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.831 --rc genhtml_branch_coverage=1 00:28:48.831 --rc genhtml_function_coverage=1 00:28:48.831 --rc genhtml_legend=1 00:28:48.831 --rc geninfo_all_blocks=1 00:28:48.831 --rc geninfo_unexecuted_blocks=1 00:28:48.831 00:28:48.831 ' 00:28:48.831 15:54:10 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:28:48.831 15:54:10 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:28:48.831 15:54:10 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:48.831 15:54:10 thread -- common/autotest_common.sh@10 -- # set +x 00:28:48.831 ************************************ 00:28:48.831 START TEST thread_poller_perf 00:28:48.831 ************************************ 00:28:48.831 15:54:10 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:28:48.831 [2024-11-05 15:54:10.044558] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 00:28:48.831 [2024-11-05 15:54:10.044781] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59499 ] 00:28:49.088 [2024-11-05 15:54:10.205385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:49.088 [2024-11-05 15:54:10.306473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:49.088 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:28:50.460 [2024-11-05T15:54:11.822Z] ====================================== 00:28:50.460 [2024-11-05T15:54:11.822Z] busy:2608934314 (cyc) 00:28:50.460 [2024-11-05T15:54:11.822Z] total_run_count: 305000 00:28:50.460 [2024-11-05T15:54:11.822Z] tsc_hz: 2600000000 (cyc) 00:28:50.460 [2024-11-05T15:54:11.822Z] ====================================== 00:28:50.460 [2024-11-05T15:54:11.822Z] poller_cost: 8553 (cyc), 3289 (nsec) 00:28:50.460 00:28:50.460 real 0m1.453s 00:28:50.460 user 0m1.283s 00:28:50.460 sys 0m0.062s 00:28:50.460 15:54:11 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:50.460 15:54:11 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:28:50.460 ************************************ 00:28:50.460 END TEST thread_poller_perf 00:28:50.460 ************************************ 00:28:50.460 15:54:11 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:28:50.460 15:54:11 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:28:50.460 15:54:11 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:50.460 15:54:11 thread -- common/autotest_common.sh@10 -- # set +x 00:28:50.460 ************************************ 00:28:50.460 START TEST thread_poller_perf 00:28:50.460 ************************************ 00:28:50.460 15:54:11 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:28:50.460 [2024-11-05 15:54:11.536011] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 00:28:50.460 [2024-11-05 15:54:11.536242] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59536 ] 00:28:50.460 [2024-11-05 15:54:11.693818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:50.460 [2024-11-05 15:54:11.779203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:50.460 Running 1000 pollers for 1 seconds with 0 microseconds period. 
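poller_cost in these tables is just busy cycles divided by total_run_count, converted to nanoseconds through the reported TSC frequency. The first run's figures (1 µs period) reproduce with shell integer math; the 0 µs busy-poll run that follows is read the same way:

busy=2608934314 runs=305000 tsc_hz=2600000000
echo "cyc:  $((busy / runs))"                        # 8553, matching the table
echo "nsec: $((busy / runs * 1000000000 / tsc_hz))"  # 3289, matching the table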
00:28:51.831 [2024-11-05T15:54:13.193Z] ====================================== 00:28:51.831 [2024-11-05T15:54:13.193Z] busy:2602622170 (cyc) 00:28:51.831 [2024-11-05T15:54:13.193Z] total_run_count: 5101000 00:28:51.831 [2024-11-05T15:54:13.193Z] tsc_hz: 2600000000 (cyc) 00:28:51.831 [2024-11-05T15:54:13.193Z] ====================================== 00:28:51.831 [2024-11-05T15:54:13.193Z] poller_cost: 510 (cyc), 196 (nsec) 00:28:51.831 ************************************ 00:28:51.831 END TEST thread_poller_perf 00:28:51.831 ************************************ 00:28:51.831 00:28:51.831 real 0m1.398s 00:28:51.831 user 0m1.228s 00:28:51.831 sys 0m0.063s 00:28:51.831 15:54:12 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:51.831 15:54:12 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:28:51.831 15:54:12 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:28:51.831 00:28:51.831 real 0m3.075s 00:28:51.831 user 0m2.626s 00:28:51.831 sys 0m0.237s 00:28:51.831 ************************************ 00:28:51.831 END TEST thread 00:28:51.831 ************************************ 00:28:51.831 15:54:12 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:51.831 15:54:12 thread -- common/autotest_common.sh@10 -- # set +x 00:28:51.831 15:54:12 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:28:51.831 15:54:12 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:28:51.831 15:54:12 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:28:51.831 15:54:12 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:51.831 15:54:12 -- common/autotest_common.sh@10 -- # set +x 00:28:51.831 ************************************ 00:28:51.831 START TEST app_cmdline 00:28:51.831 ************************************ 00:28:51.831 15:54:12 app_cmdline -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:28:51.831 * Looking for test storage... 
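app_cmdline opens with the same preamble as the thread suite: find test storage, then gate LCOV_OPTS on whether the installed lcov (1.15 here) is older than 2, using the numeric, field-by-field comparison in scripts/common.sh. A hypothetical condensed form of that less-than check (version_lt is a stand-in name, not the script's own):

version_lt() {
    # Split both versions on dots and compare each field numerically.
    local IFS=. i
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # equal versions are not less-than
}
version_lt 1.15 2 && echo "old lcov: keep the plain --rc options"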
00:28:51.831 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:28:51.831 15:54:13 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:51.831 15:54:13 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:28:51.831 15:54:13 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:51.831 15:54:13 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:51.831 15:54:13 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:51.831 15:54:13 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:51.831 15:54:13 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:51.831 15:54:13 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:28:51.831 15:54:13 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:28:51.831 15:54:13 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:28:51.831 15:54:13 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:28:51.831 15:54:13 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:28:51.831 15:54:13 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:28:51.831 15:54:13 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:28:51.831 15:54:13 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:51.831 15:54:13 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:28:51.831 15:54:13 app_cmdline -- scripts/common.sh@345 -- # : 1 00:28:51.831 15:54:13 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:51.831 15:54:13 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:51.831 15:54:13 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:28:51.831 15:54:13 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:28:51.831 15:54:13 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:51.831 15:54:13 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:28:51.831 15:54:13 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:28:51.831 15:54:13 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:28:51.831 15:54:13 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:28:51.831 15:54:13 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:51.831 15:54:13 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:28:51.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:51.831 15:54:13 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:28:51.831 15:54:13 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:51.831 15:54:13 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:51.831 15:54:13 app_cmdline -- scripts/common.sh@368 -- # return 0 00:28:51.831 15:54:13 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:51.831 15:54:13 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:51.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:51.831 --rc genhtml_branch_coverage=1 00:28:51.831 --rc genhtml_function_coverage=1 00:28:51.831 --rc genhtml_legend=1 00:28:51.831 --rc geninfo_all_blocks=1 00:28:51.831 --rc geninfo_unexecuted_blocks=1 00:28:51.831 00:28:51.831 ' 00:28:51.831 15:54:13 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:51.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:51.831 --rc genhtml_branch_coverage=1 00:28:51.831 --rc genhtml_function_coverage=1 00:28:51.831 --rc genhtml_legend=1 00:28:51.831 --rc geninfo_all_blocks=1 00:28:51.831 --rc geninfo_unexecuted_blocks=1 00:28:51.831 00:28:51.831 ' 00:28:51.831 15:54:13 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:51.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:51.831 --rc genhtml_branch_coverage=1 00:28:51.831 --rc genhtml_function_coverage=1 00:28:51.831 --rc genhtml_legend=1 00:28:51.832 --rc geninfo_all_blocks=1 00:28:51.832 --rc geninfo_unexecuted_blocks=1 00:28:51.832 00:28:51.832 ' 00:28:51.832 15:54:13 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:51.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:51.832 --rc genhtml_branch_coverage=1 00:28:51.832 --rc genhtml_function_coverage=1 00:28:51.832 --rc genhtml_legend=1 00:28:51.832 --rc geninfo_all_blocks=1 00:28:51.832 --rc geninfo_unexecuted_blocks=1 00:28:51.832 00:28:51.832 ' 00:28:51.832 15:54:13 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:28:51.832 15:54:13 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59619 00:28:51.832 15:54:13 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59619 00:28:51.832 15:54:13 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:28:51.832 15:54:13 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 59619 ']' 00:28:51.832 15:54:13 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:51.832 15:54:13 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:51.832 15:54:13 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:51.832 15:54:13 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:51.832 15:54:13 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:28:51.832 [2024-11-05 15:54:13.186568] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 
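This target is deliberately restricted: --rpcs-allowed spdk_get_version,rpc_get_methods means only those two methods are reachable over the socket, and the rest of the suite exercises both halves of that contract (the version object is returned, the method list has exactly two entries, and anything else is rejected with -32601). In rpc.py terms:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc spdk_get_version        # allowed: returns the version object shown below
$rpc rpc_get_methods         # allowed: lists exactly the two permitted methods
$rpc env_dpdk_get_mem_stats  # rejected: JSON-RPC -32601 "Method not found"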
00:28:51.832 [2024-11-05 15:54:13.186686] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59619 ] 00:28:52.089 [2024-11-05 15:54:13.344655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:52.089 [2024-11-05 15:54:13.445852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:53.023 15:54:14 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:53.023 15:54:14 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:28:53.023 15:54:14 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:28:53.023 { 00:28:53.023 "version": "SPDK v25.01-pre git sha1 eca0d2cd8", 00:28:53.023 "fields": { 00:28:53.023 "major": 25, 00:28:53.023 "minor": 1, 00:28:53.023 "patch": 0, 00:28:53.023 "suffix": "-pre", 00:28:53.023 "commit": "eca0d2cd8" 00:28:53.023 } 00:28:53.023 } 00:28:53.023 15:54:14 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:28:53.023 15:54:14 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:28:53.023 15:54:14 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:28:53.023 15:54:14 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:28:53.023 15:54:14 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:28:53.023 15:54:14 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:28:53.023 15:54:14 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.023 15:54:14 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:28:53.023 15:54:14 app_cmdline -- app/cmdline.sh@26 -- # sort 00:28:53.023 15:54:14 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.023 15:54:14 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:28:53.023 15:54:14 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:28:53.023 15:54:14 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:28:53.023 15:54:14 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:28:53.023 15:54:14 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:28:53.023 15:54:14 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:53.023 15:54:14 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:53.023 15:54:14 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:53.023 15:54:14 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:53.023 15:54:14 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:53.023 15:54:14 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:53.023 15:54:14 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:53.023 15:54:14 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:28:53.023 15:54:14 app_cmdline -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:28:53.280 request: 00:28:53.280 { 00:28:53.280 "method": "env_dpdk_get_mem_stats", 00:28:53.280 "req_id": 1 00:28:53.280 } 00:28:53.280 Got JSON-RPC error response 00:28:53.280 response: 00:28:53.280 { 00:28:53.280 "code": -32601, 00:28:53.280 "message": "Method not found" 00:28:53.280 } 00:28:53.280 15:54:14 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:28:53.280 15:54:14 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:53.280 15:54:14 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:53.280 15:54:14 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:53.280 15:54:14 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59619 00:28:53.280 15:54:14 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 59619 ']' 00:28:53.280 15:54:14 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 59619 00:28:53.280 15:54:14 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:28:53.280 15:54:14 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:53.280 15:54:14 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59619 00:28:53.280 killing process with pid 59619 00:28:53.280 15:54:14 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:53.280 15:54:14 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:53.280 15:54:14 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59619' 00:28:53.280 15:54:14 app_cmdline -- common/autotest_common.sh@971 -- # kill 59619 00:28:53.280 15:54:14 app_cmdline -- common/autotest_common.sh@976 -- # wait 59619 00:28:54.850 00:28:54.850 real 0m3.035s 00:28:54.850 user 0m3.327s 00:28:54.850 sys 0m0.442s 00:28:54.850 15:54:16 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:54.850 ************************************ 00:28:54.850 END TEST app_cmdline 00:28:54.850 ************************************ 00:28:54.850 15:54:16 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:28:54.850 15:54:16 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:28:54.851 15:54:16 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:28:54.851 15:54:16 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:54.851 15:54:16 -- common/autotest_common.sh@10 -- # set +x 00:28:54.851 ************************************ 00:28:54.851 START TEST version 00:28:54.851 ************************************ 00:28:54.851 15:54:16 version -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:28:54.851 * Looking for test storage... 
00:28:54.851 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:28:54.851 15:54:16 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:54.851 15:54:16 version -- common/autotest_common.sh@1691 -- # lcov --version 00:28:54.851 15:54:16 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:54.851 15:54:16 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:54.851 15:54:16 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:54.851 15:54:16 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:54.851 15:54:16 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:54.851 15:54:16 version -- scripts/common.sh@336 -- # IFS=.-: 00:28:54.851 15:54:16 version -- scripts/common.sh@336 -- # read -ra ver1 00:28:54.851 15:54:16 version -- scripts/common.sh@337 -- # IFS=.-: 00:28:54.851 15:54:16 version -- scripts/common.sh@337 -- # read -ra ver2 00:28:54.851 15:54:16 version -- scripts/common.sh@338 -- # local 'op=<' 00:28:54.851 15:54:16 version -- scripts/common.sh@340 -- # ver1_l=2 00:28:54.851 15:54:16 version -- scripts/common.sh@341 -- # ver2_l=1 00:28:54.851 15:54:16 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:54.851 15:54:16 version -- scripts/common.sh@344 -- # case "$op" in 00:28:54.851 15:54:16 version -- scripts/common.sh@345 -- # : 1 00:28:54.851 15:54:16 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:54.851 15:54:16 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:54.851 15:54:16 version -- scripts/common.sh@365 -- # decimal 1 00:28:54.851 15:54:16 version -- scripts/common.sh@353 -- # local d=1 00:28:54.851 15:54:16 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:54.851 15:54:16 version -- scripts/common.sh@355 -- # echo 1 00:28:54.851 15:54:16 version -- scripts/common.sh@365 -- # ver1[v]=1 00:28:54.851 15:54:16 version -- scripts/common.sh@366 -- # decimal 2 00:28:54.851 15:54:16 version -- scripts/common.sh@353 -- # local d=2 00:28:54.851 15:54:16 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:54.851 15:54:16 version -- scripts/common.sh@355 -- # echo 2 00:28:54.851 15:54:16 version -- scripts/common.sh@366 -- # ver2[v]=2 00:28:54.851 15:54:16 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:54.851 15:54:16 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:54.851 15:54:16 version -- scripts/common.sh@368 -- # return 0 00:28:54.851 15:54:16 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:54.851 15:54:16 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:54.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:54.851 --rc genhtml_branch_coverage=1 00:28:54.851 --rc genhtml_function_coverage=1 00:28:54.851 --rc genhtml_legend=1 00:28:54.851 --rc geninfo_all_blocks=1 00:28:54.851 --rc geninfo_unexecuted_blocks=1 00:28:54.851 00:28:54.851 ' 00:28:54.851 15:54:16 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:54.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:54.851 --rc genhtml_branch_coverage=1 00:28:54.851 --rc genhtml_function_coverage=1 00:28:54.851 --rc genhtml_legend=1 00:28:54.851 --rc geninfo_all_blocks=1 00:28:54.851 --rc geninfo_unexecuted_blocks=1 00:28:54.851 00:28:54.851 ' 00:28:54.851 15:54:16 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:54.851 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:28:54.851 --rc genhtml_branch_coverage=1 00:28:54.851 --rc genhtml_function_coverage=1 00:28:54.851 --rc genhtml_legend=1 00:28:54.851 --rc geninfo_all_blocks=1 00:28:54.851 --rc geninfo_unexecuted_blocks=1 00:28:54.851 00:28:54.851 ' 00:28:54.851 15:54:16 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:54.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:54.851 --rc genhtml_branch_coverage=1 00:28:54.851 --rc genhtml_function_coverage=1 00:28:54.851 --rc genhtml_legend=1 00:28:54.851 --rc geninfo_all_blocks=1 00:28:54.851 --rc geninfo_unexecuted_blocks=1 00:28:54.851 00:28:54.851 ' 00:28:54.851 15:54:16 version -- app/version.sh@17 -- # get_header_version major 00:28:54.851 15:54:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:28:54.851 15:54:16 version -- app/version.sh@14 -- # cut -f2 00:28:54.851 15:54:16 version -- app/version.sh@14 -- # tr -d '"' 00:28:54.851 15:54:16 version -- app/version.sh@17 -- # major=25 00:28:54.851 15:54:16 version -- app/version.sh@18 -- # get_header_version minor 00:28:54.851 15:54:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:28:54.851 15:54:16 version -- app/version.sh@14 -- # tr -d '"' 00:28:54.851 15:54:16 version -- app/version.sh@14 -- # cut -f2 00:28:54.851 15:54:16 version -- app/version.sh@18 -- # minor=1 00:28:54.851 15:54:16 version -- app/version.sh@19 -- # get_header_version patch 00:28:54.851 15:54:16 version -- app/version.sh@14 -- # cut -f2 00:28:54.851 15:54:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:28:54.851 15:54:16 version -- app/version.sh@14 -- # tr -d '"' 00:28:54.851 15:54:16 version -- app/version.sh@19 -- # patch=0 00:28:54.851 15:54:16 version -- app/version.sh@20 -- # get_header_version suffix 00:28:54.851 15:54:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:28:54.851 15:54:16 version -- app/version.sh@14 -- # cut -f2 00:28:54.851 15:54:16 version -- app/version.sh@14 -- # tr -d '"' 00:28:55.109 15:54:16 version -- app/version.sh@20 -- # suffix=-pre 00:28:55.109 15:54:16 version -- app/version.sh@22 -- # version=25.1 00:28:55.109 15:54:16 version -- app/version.sh@25 -- # (( patch != 0 )) 00:28:55.109 15:54:16 version -- app/version.sh@28 -- # version=25.1rc0 00:28:55.109 15:54:16 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:28:55.109 15:54:16 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:28:55.109 15:54:16 version -- app/version.sh@30 -- # py_version=25.1rc0 00:28:55.109 15:54:16 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:28:55.109 00:28:55.109 real 0m0.198s 00:28:55.109 user 0m0.127s 00:28:55.109 sys 0m0.096s 00:28:55.109 ************************************ 00:28:55.109 END TEST version 00:28:55.109 ************************************ 00:28:55.109 15:54:16 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:55.109 15:54:16 version -- common/autotest_common.sh@10 -- # set +x 00:28:55.109 15:54:16 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:28:55.109 15:54:16 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:28:55.109 15:54:16 -- spdk/autotest.sh@194 -- # uname -s 00:28:55.109 15:54:16 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:28:55.109 15:54:16 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:28:55.109 15:54:16 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:28:55.109 15:54:16 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:28:55.109 15:54:16 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:28:55.109 15:54:16 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:28:55.109 15:54:16 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:55.109 15:54:16 -- common/autotest_common.sh@10 -- # set +x 00:28:55.109 ************************************ 00:28:55.109 START TEST blockdev_nvme 00:28:55.109 ************************************ 00:28:55.109 15:54:16 blockdev_nvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:28:55.109 * Looking for test storage... 00:28:55.109 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:28:55.109 15:54:16 blockdev_nvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:55.109 15:54:16 blockdev_nvme -- common/autotest_common.sh@1691 -- # lcov --version 00:28:55.109 15:54:16 blockdev_nvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:55.109 15:54:16 blockdev_nvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:55.109 15:54:16 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:55.109 15:54:16 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:55.109 15:54:16 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:55.109 15:54:16 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:28:55.109 15:54:16 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:28:55.109 15:54:16 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:28:55.109 15:54:16 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:28:55.109 15:54:16 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:28:55.109 15:54:16 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:28:55.109 15:54:16 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:28:55.109 15:54:16 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:55.109 15:54:16 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:28:55.109 15:54:16 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:28:55.109 15:54:16 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:55.109 15:54:16 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:55.109 15:54:16 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:28:55.109 15:54:16 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:28:55.109 15:54:16 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:55.109 15:54:16 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:28:55.109 15:54:16 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:28:55.109 15:54:16 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:28:55.109 15:54:16 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:28:55.109 15:54:16 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:55.109 15:54:16 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:28:55.109 15:54:16 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:28:55.109 15:54:16 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:55.109 15:54:16 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:55.109 15:54:16 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:28:55.109 15:54:16 blockdev_nvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:55.109 15:54:16 blockdev_nvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:55.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:55.109 --rc genhtml_branch_coverage=1 00:28:55.109 --rc genhtml_function_coverage=1 00:28:55.109 --rc genhtml_legend=1 00:28:55.109 --rc geninfo_all_blocks=1 00:28:55.109 --rc geninfo_unexecuted_blocks=1 00:28:55.109 00:28:55.109 ' 00:28:55.109 15:54:16 blockdev_nvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:55.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:55.109 --rc genhtml_branch_coverage=1 00:28:55.109 --rc genhtml_function_coverage=1 00:28:55.109 --rc genhtml_legend=1 00:28:55.109 --rc geninfo_all_blocks=1 00:28:55.109 --rc geninfo_unexecuted_blocks=1 00:28:55.109 00:28:55.109 ' 00:28:55.109 15:54:16 blockdev_nvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:55.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:55.109 --rc genhtml_branch_coverage=1 00:28:55.109 --rc genhtml_function_coverage=1 00:28:55.109 --rc genhtml_legend=1 00:28:55.109 --rc geninfo_all_blocks=1 00:28:55.109 --rc geninfo_unexecuted_blocks=1 00:28:55.109 00:28:55.109 ' 00:28:55.109 15:54:16 blockdev_nvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:55.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:55.109 --rc genhtml_branch_coverage=1 00:28:55.109 --rc genhtml_function_coverage=1 00:28:55.109 --rc genhtml_legend=1 00:28:55.109 --rc geninfo_all_blocks=1 00:28:55.109 --rc geninfo_unexecuted_blocks=1 00:28:55.109 00:28:55.109 ' 00:28:55.109 15:54:16 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:28:55.109 15:54:16 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:28:55.109 15:54:16 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:28:55.109 15:54:16 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:28:55.109 15:54:16 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:28:55.109 15:54:16 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:28:55.109 15:54:16 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:28:55.109 15:54:16 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:28:55.109 15:54:16 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:28:55.109 15:54:16 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:28:55.109 15:54:16 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:28:55.109 15:54:16 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:28:55.109 15:54:16 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:28:55.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:55.109 15:54:16 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:28:55.109 15:54:16 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:28:55.109 15:54:16 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:28:55.109 15:54:16 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:28:55.109 15:54:16 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:28:55.109 15:54:16 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:28:55.109 15:54:16 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:28:55.109 15:54:16 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:28:55.109 15:54:16 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:28:55.109 15:54:16 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:28:55.109 15:54:16 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:28:55.109 15:54:16 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=59797 00:28:55.109 15:54:16 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:28:55.109 15:54:16 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 59797 00:28:55.109 15:54:16 blockdev_nvme -- common/autotest_common.sh@833 -- # '[' -z 59797 ']' 00:28:55.109 15:54:16 blockdev_nvme -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:55.109 15:54:16 blockdev_nvme -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:55.109 15:54:16 blockdev_nvme -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:55.109 15:54:16 blockdev_nvme -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:55.109 15:54:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:28:55.109 15:54:16 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:28:55.367 [2024-11-05 15:54:16.504806] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 
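setup_nvme_conf, next, pipes gen_nvme.sh's generated JSON into load_subsystem_config, attaching the four QEMU NVMe controllers in one shot. An equivalent one-at-a-time sketch, assuming bdev_nvme_attach_controller's standard -b/-t/-a flags and the PCIe addresses from that config:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for i in 0 1 2 3; do
    # Nvme0..Nvme3 live at 0000:00:10.0 through 0000:00:13.0
    $rpc bdev_nvme_attach_controller -b "Nvme$i" -t PCIe -a "0000:00:1$i.0"
done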
00:28:55.367 [2024-11-05 15:54:16.505063] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59797 ] 00:28:55.367 [2024-11-05 15:54:16.665229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:55.624 [2024-11-05 15:54:16.765000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:56.190 15:54:17 blockdev_nvme -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:56.190 15:54:17 blockdev_nvme -- common/autotest_common.sh@866 -- # return 0 00:28:56.190 15:54:17 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:28:56.190 15:54:17 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:28:56.190 15:54:17 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:28:56.190 15:54:17 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:28:56.190 15:54:17 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:28:56.190 15:54:17 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:28:56.190 15:54:17 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.190 15:54:17 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:28:56.449 15:54:17 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:56.449 15:54:17 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:28:56.449 15:54:17 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.449 15:54:17 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:28:56.449 15:54:17 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:56.449 15:54:17 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:28:56.449 15:54:17 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:28:56.449 15:54:17 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.449 15:54:17 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:28:56.449 15:54:17 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:56.449 15:54:17 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:28:56.449 15:54:17 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.449 15:54:17 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:28:56.449 15:54:17 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:56.449 15:54:17 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:28:56.449 15:54:17 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.449 15:54:17 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:28:56.449 15:54:17 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:56.449 15:54:17 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:28:56.449 15:54:17 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:28:56.449 15:54:17 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.449 15:54:17 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:28:56.449 15:54:17 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:28:56.449 15:54:17 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:56.449 15:54:17 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:28:56.449 15:54:17 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:28:56.449 15:54:17 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "10524084-cdac-460d-945d-dbcaa1decc5a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "10524084-cdac-460d-945d-dbcaa1decc5a",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "d636d0ab-cebb-40ba-a77d-3037b81381a6"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "d636d0ab-cebb-40ba-a77d-3037b81381a6",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "6595771b-a688-4025-81c1-4b0acad349c6"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6595771b-a688-4025-81c1-4b0acad349c6",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "f2099dad-db11-49b0-969d-112e244ceba5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f2099dad-db11-49b0-969d-112e244ceba5",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "fe39223c-3ae2-403b-a824-38d61ec2c567"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "fe39223c-3ae2-403b-a824-38d61ec2c567",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "89fa474c-ef7b-45f1-bfd8-1440b1103f49"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "89fa474c-ef7b-45f1-bfd8-1440b1103f49",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:28:56.707 15:54:17 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:28:56.707 15:54:17 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:28:56.707 15:54:17 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:28:56.707 15:54:17 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 59797 00:28:56.707 15:54:17 blockdev_nvme -- common/autotest_common.sh@952 -- # '[' -z 59797 ']' 00:28:56.707 15:54:17 blockdev_nvme -- common/autotest_common.sh@956 -- # kill -0 59797 00:28:56.707 15:54:17 blockdev_nvme -- common/autotest_common.sh@957 -- # uname 00:28:56.707 15:54:17 
blockdev_nvme -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:56.707 15:54:17 blockdev_nvme -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59797 00:28:56.707 killing process with pid 59797 00:28:56.707 15:54:17 blockdev_nvme -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:56.707 15:54:17 blockdev_nvme -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:56.707 15:54:17 blockdev_nvme -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59797' 00:28:56.707 15:54:17 blockdev_nvme -- common/autotest_common.sh@971 -- # kill 59797 00:28:56.707 15:54:17 blockdev_nvme -- common/autotest_common.sh@976 -- # wait 59797 00:28:58.081 15:54:19 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:58.081 15:54:19 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:28:58.081 15:54:19 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:28:58.081 15:54:19 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:58.081 15:54:19 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:28:58.081 ************************************ 00:28:58.081 START TEST bdev_hello_world 00:28:58.081 ************************************ 00:28:58.081 15:54:19 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:28:58.081 [2024-11-05 15:54:19.406526] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 00:28:58.081 [2024-11-05 15:54:19.406646] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59881 ] 00:28:58.339 [2024-11-05 15:54:19.566053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.339 [2024-11-05 15:54:19.664811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:58.906 [2024-11-05 15:54:20.198236] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:28:58.906 [2024-11-05 15:54:20.198290] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:28:58.906 [2024-11-05 15:54:20.198313] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:28:58.906 [2024-11-05 15:54:20.200841] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:28:58.906 [2024-11-05 15:54:20.201246] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:28:58.906 [2024-11-05 15:54:20.201274] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:28:58.906 [2024-11-05 15:54:20.201390] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
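Note on the bdev_hello_world run above: it drives SPDK's stock hello_bdev example end to end against the first NVMe bdev. A minimal manual reproduction, sketched from the exact binary paths and flags visible in this log (the /tmp/bdev.json output path below is illustrative only; this run used test/bdev/bdev.json):

  # regenerate the NVMe attach config, same helper the test loaded via load_subsystem_config above
  /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh > /tmp/bdev.json   # output path is an assumption
  # run the example: open Nvme0n1, write a buffer, read it back
  /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /tmp/bdev.json -b Nvme0n1

The "Read string from bdev : Hello World!" notice above is the read-back verification printed by hello_bdev.c before the app stops.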
00:28:58.906 00:28:58.906 [2024-11-05 15:54:20.201411] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:28:59.839 00:28:59.839 real 0m1.567s 00:28:59.839 user 0m1.296s 00:28:59.839 sys 0m0.164s 00:28:59.839 ************************************ 00:28:59.839 END TEST bdev_hello_world 00:28:59.839 ************************************ 00:28:59.839 15:54:20 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:59.839 15:54:20 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:28:59.839 15:54:20 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:28:59.839 15:54:20 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:28:59.839 15:54:20 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:59.839 15:54:20 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:28:59.839 ************************************ 00:28:59.839 START TEST bdev_bounds 00:28:59.839 ************************************ 00:28:59.839 15:54:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:28:59.839 15:54:20 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=59917 00:28:59.839 15:54:20 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:28:59.839 Process bdevio pid: 59917 00:28:59.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:59.839 15:54:20 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 59917' 00:28:59.839 15:54:20 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 59917 00:28:59.839 15:54:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 59917 ']' 00:28:59.839 15:54:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:59.839 15:54:20 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:28:59.839 15:54:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:59.839 15:54:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:59.839 15:54:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:59.839 15:54:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:28:59.839 [2024-11-05 15:54:21.018200] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 
00:28:59.839 [2024-11-05 15:54:21.018324] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59917 ] 00:28:59.839 [2024-11-05 15:54:21.179078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:00.097 [2024-11-05 15:54:21.281166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:00.097 [2024-11-05 15:54:21.281578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:00.097 [2024-11-05 15:54:21.281594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:00.662 15:54:21 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:00.662 15:54:21 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:29:00.662 15:54:21 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:29:00.662 I/O targets: 00:29:00.662 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:29:00.662 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:29:00.662 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:29:00.662 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:29:00.662 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:29:00.662 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:29:00.662 00:29:00.662 00:29:00.662 CUnit - A unit testing framework for C - Version 2.1-3 00:29:00.662 http://cunit.sourceforge.net/ 00:29:00.662 00:29:00.662 00:29:00.662 Suite: bdevio tests on: Nvme3n1 00:29:00.662 Test: blockdev write read block ...passed 00:29:00.662 Test: blockdev write zeroes read block ...passed 00:29:00.662 Test: blockdev write zeroes read no split ...passed 00:29:00.662 Test: blockdev write zeroes read split ...passed 00:29:00.662 Test: blockdev write zeroes read split partial ...passed 00:29:00.663 Test: blockdev reset ...[2024-11-05 15:54:22.015480] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:29:00.663 [2024-11-05 15:54:22.018212] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:29:00.663 passed 00:29:00.663 Test: blockdev write read 8 blocks ...passed 00:29:00.663 Test: blockdev write read size > 128k ...passed 00:29:00.663 Test: blockdev write read invalid size ...passed 00:29:00.663 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:00.663 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:00.663 Test: blockdev write read max offset ...passed 00:29:00.663 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:00.663 Test: blockdev writev readv 8 blocks ...passed 00:29:00.663 Test: blockdev writev readv 30 x 1block ...passed 00:29:00.663 Test: blockdev writev readv block ...passed 00:29:00.663 Test: blockdev writev readv size > 128k ...passed 00:29:00.663 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:00.663 Test: blockdev comparev and writev ...[2024-11-05 15:54:22.023242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b180a000 len:0x1000 00:29:00.663 [2024-11-05 15:54:22.023289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:29:00.663 passed 00:29:00.663 Test: blockdev nvme passthru rw ...passed 00:29:00.663 Test: blockdev nvme passthru vendor specific ...[2024-11-05 15:54:22.023749] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:29:00.663 [2024-11-05 15:54:22.023774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:29:00.663 passed 00:29:00.663 Test: blockdev nvme admin passthru ...passed 00:29:00.663 Test: blockdev copy ...passed
00:29:00.663 Suite: bdevio tests on: Nvme2n3 00:29:00.955 Test: blockdev write read block ...passed 00:29:00.955 Test: blockdev write zeroes read block ...passed 00:29:00.955 Test: blockdev write zeroes read no split ...passed 00:29:00.955 Test: blockdev write zeroes read split ...passed 00:29:00.955 Test: blockdev write zeroes read split partial ...passed 00:29:00.955 Test: blockdev reset ...[2024-11-05 15:54:22.070968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:29:00.955 [2024-11-05 15:54:22.073788] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:29:00.955 passed 00:29:00.955 Test: blockdev write read 8 blocks ...passed 00:29:00.955 Test: blockdev write read size > 128k ...passed 00:29:00.955 Test: blockdev write read invalid size ...passed 00:29:00.955 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:00.955 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:00.955 Test: blockdev write read max offset ...passed 00:29:00.955 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:00.955 Test: blockdev writev readv 8 blocks ...passed 00:29:00.955 Test: blockdev writev readv 30 x 1block ...passed 00:29:00.955 Test: blockdev writev readv block ...passed 00:29:00.956 Test: blockdev writev readv size > 128k ...passed 00:29:00.956 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:00.956 Test: blockdev comparev and writev ...[2024-11-05 15:54:22.078754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b5c06000 len:0x1000 00:29:00.956 [2024-11-05 15:54:22.078800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:29:00.956 passed 00:29:00.956 Test: blockdev nvme passthru rw ...passed 00:29:00.956 Test: blockdev nvme passthru vendor specific ...[2024-11-05 15:54:22.079229] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:29:00.956 [2024-11-05 15:54:22.079257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:29:00.956 passed 00:29:00.956 Test: blockdev nvme admin passthru ...passed 00:29:00.956 Test: blockdev copy ...passed
00:29:00.956 Suite: bdevio tests on: Nvme2n2 00:29:00.956 Test: blockdev write read block ...passed 00:29:00.956 Test: blockdev write zeroes read block ...passed 00:29:00.956 Test: blockdev write zeroes read no split ...passed 00:29:00.956 Test: blockdev write zeroes read split ...passed 00:29:00.956 Test: blockdev write zeroes read split partial ...passed 00:29:00.956 Test: blockdev reset ...[2024-11-05 15:54:22.124079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:29:00.956 [2024-11-05 15:54:22.126931] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:29:00.956 passed 00:29:00.956 Test: blockdev write read 8 blocks ...passed 00:29:00.956 Test: blockdev write read size > 128k ...passed 00:29:00.956 Test: blockdev write read invalid size ...passed 00:29:00.956 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:00.956 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:00.956 Test: blockdev write read max offset ...passed 00:29:00.956 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:00.956 Test: blockdev writev readv 8 blocks ...passed 00:29:00.956 Test: blockdev writev readv 30 x 1block ...passed 00:29:00.956 Test: blockdev writev readv block ...passed 00:29:00.956 Test: blockdev writev readv size > 128k ...passed 00:29:00.956 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:00.956 Test: blockdev comparev and writev ...[2024-11-05 15:54:22.133613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c743c000 len:0x1000 00:29:00.956 [2024-11-05 15:54:22.133869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:29:00.956 passed 00:29:00.956 Test: blockdev nvme passthru rw ...passed 00:29:00.956 Test: blockdev nvme passthru vendor specific ...[2024-11-05 15:54:22.134835] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:29:00.956 [2024-11-05 15:54:22.135025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:29:00.956 passed 00:29:00.956 Test: blockdev nvme admin passthru ...passed 00:29:00.956 Test: blockdev copy ...passed 00:29:00.956 Suite: bdevio tests on: Nvme2n1 00:29:00.956 Test: blockdev write read block ...passed 00:29:00.956 Test: blockdev write zeroes read block ...passed 00:29:00.956 Test: blockdev write zeroes read no split ...passed 00:29:00.956 Test: blockdev write zeroes read split ...passed 00:29:00.956 Test: blockdev write zeroes read split partial ...passed 00:29:00.956 Test: blockdev reset ...[2024-11-05 15:54:22.191902] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:29:00.956 [2024-11-05 15:54:22.194848] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:29:00.956 passed 00:29:00.956 Test: blockdev write read 8 blocks ...passed 00:29:00.956 Test: blockdev write read size > 128k ...passed 00:29:00.956 Test: blockdev write read invalid size ...passed 00:29:00.956 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:00.956 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:00.956 Test: blockdev write read max offset ...passed 00:29:00.956 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:00.956 Test: blockdev writev readv 8 blocks ...passed 00:29:00.956 Test: blockdev writev readv 30 x 1block ...passed 00:29:00.956 Test: blockdev writev readv block ...passed 00:29:00.956 Test: blockdev writev readv size > 128k ...passed 00:29:00.956 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:00.956 Test: blockdev comparev and writev ...[2024-11-05 15:54:22.200389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c7438000 len:0x1000 00:29:00.956 [2024-11-05 15:54:22.200513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:29:00.956 passed 00:29:00.956 Test: blockdev nvme passthru rw ...passed 00:29:00.956 Test: blockdev nvme passthru vendor specific ...[2024-11-05 15:54:22.200974] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:29:00.956 [2024-11-05 15:54:22.201002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:29:00.956 passed 00:29:00.956 Test: blockdev nvme admin passthru ...passed 00:29:00.956 Test: blockdev copy ...passed
00:29:00.956 Suite: bdevio tests on: Nvme1n1 00:29:00.956 Test: blockdev write read block ...passed 00:29:00.956 Test: blockdev write zeroes read block ...passed 00:29:00.956 Test: blockdev write zeroes read no split ...passed 00:29:00.956 Test: blockdev write zeroes read split ...passed 00:29:00.956 Test: blockdev write zeroes read split partial ...passed 00:29:00.956 Test: blockdev reset ...[2024-11-05 15:54:22.246195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:29:00.956 [2024-11-05 15:54:22.248775] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 00:29:00.956 passed 00:29:00.956 Test: blockdev write read 8 blocks ...passed 00:29:00.956 Test: blockdev write read size > 128k ...passed 00:29:00.956 Test: blockdev write read invalid size ...passed 00:29:00.956 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:00.956 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:00.956 Test: blockdev write read max offset ...passed 00:29:00.956 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:00.956 Test: blockdev writev readv 8 blocks ...passed 00:29:00.956 Test: blockdev writev readv 30 x 1block ...passed 00:29:00.956 Test: blockdev writev readv block ...passed 00:29:00.956 Test: blockdev writev readv size > 128k ...passed 00:29:00.956 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:00.956 Test: blockdev comparev and writev ...[2024-11-05 15:54:22.254764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c7434000 len:0x1000 00:29:00.956 [2024-11-05 15:54:22.254947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:29:00.956 passed 00:29:00.956 Test: blockdev nvme passthru rw ...passed 00:29:00.956 Test: blockdev nvme passthru vendor specific ...[2024-11-05 15:54:22.255509] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:29:00.956 [2024-11-05 15:54:22.255553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:29:00.956 passed 00:29:00.956 Test: blockdev nvme admin passthru ...passed 00:29:00.956 Test: blockdev copy ...passed
00:29:00.956 Suite: bdevio tests on: Nvme0n1 00:29:00.956 Test: blockdev write read block ...passed 00:29:00.956 Test: blockdev write zeroes read block ...passed 00:29:00.956 Test: blockdev write zeroes read no split ...passed 00:29:00.956 Test: blockdev write zeroes read split ...passed 00:29:00.956 Test: blockdev write zeroes read split partial ...passed 00:29:00.956 Test: blockdev reset ...[2024-11-05 15:54:22.301960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:29:00.956 [2024-11-05 15:54:22.304570] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:29:00.956 passed 00:29:00.956 Test: blockdev write read 8 blocks ...passed 00:29:00.956 Test: blockdev write read size > 128k ...passed 00:29:00.956 Test: blockdev write read invalid size ...passed 00:29:00.956 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:00.956 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:00.956 Test: blockdev write read max offset ...passed 00:29:00.956 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:00.956 Test: blockdev writev readv 8 blocks ...passed 00:29:00.956 Test: blockdev writev readv 30 x 1block ...passed 00:29:00.956 Test: blockdev writev readv block ...passed 00:29:00.956 Test: blockdev writev readv size > 128k ...passed 00:29:00.956 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:00.956 Test: blockdev comparev and writev ...[2024-11-05 15:54:22.309925] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has separate metadata which is not supported yet. 00:29:00.956 passed 00:29:00.956 Test: blockdev nvme passthru rw ...passed 00:29:00.956 Test: blockdev nvme passthru vendor specific ...[2024-11-05 15:54:22.310352] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:29:00.956 [2024-11-05 15:54:22.310472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:29:00.956 passed 00:29:00.956 Test: blockdev nvme admin passthru ...passed 00:29:00.956 Test: blockdev copy ...passed
00:29:00.956 00:29:00.956 Run Summary: Type Total Ran Passed Failed Inactive 00:29:00.956 suites 6 6 n/a 0 0 00:29:00.956 tests 138 138 138 0 0 00:29:00.956 asserts 893 893 893 0 n/a 00:29:00.956 00:29:00.956 Elapsed time = 0.974 seconds 00:29:00.956 0 00:29:01.213 15:54:22 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 59917 00:29:01.213 15:54:22 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 59917 ']' 00:29:01.213 15:54:22 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 59917 00:29:01.213 15:54:22 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:29:01.213 15:54:22 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:01.213 15:54:22 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59917 00:29:01.213 killing process with pid 59917 00:29:01.213 15:54:22 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:01.213 15:54:22 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:01.213 15:54:22 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59917' 00:29:01.213 15:54:22 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@971 -- # kill 59917 00:29:01.213 15:54:22 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@976 -- # wait 59917 00:29:01.779 15:54:22 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:29:01.779 ************************************ 00:29:01.779 END TEST bdev_bounds 00:29:01.779 ************************************ 00:29:01.779 00:29:01.779 real 0m2.032s 00:29:01.779 user 0m5.247s 00:29:01.779 sys 0m0.248s 00:29:01.779 15:54:22 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:01.779 15:54:22 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:29:01.779 15:54:23 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:29:01.779 15:54:23 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:29:01.779 15:54:23 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:01.779 15:54:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:29:01.779 ************************************ 00:29:01.779 START TEST bdev_nbd 00:29:01.779 ************************************ 00:29:01.779 15:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:29:01.779 15:54:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:29:01.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
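The bdev_nbd test starting here exports each bdev as a kernel /dev/nbdX device and round-trips data through it over the private RPC socket announced above. A minimal manual sketch, using only commands that appear verbatim later in this log (the of= path is shortened here for illustration):

  # start a bare bdev service listening on the nbd RPC socket
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
  # export a bdev as an nbd device, verify with one direct 4096-byte read, then tear down
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
  dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0

The waitfornbd/waitfornbd_exit helpers seen below poll /proc/partitions for the device name (waitfornbd additionally verifies a one-block direct dd read) until the nbd device appears or disappears.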
00:29:01.779 15:54:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:29:01.779 15:54:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:01.779 15:54:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:01.779 15:54:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:29:01.779 15:54:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:29:01.779 15:54:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:29:01.779 15:54:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:29:01.779 15:54:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:29:01.779 15:54:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:29:01.779 15:54:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:29:01.779 15:54:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:29:01.779 15:54:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:29:01.779 15:54:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:29:01.779 15:54:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:29:01.779 15:54:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=59971 00:29:01.779 15:54:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:29:01.779 15:54:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 59971 /var/tmp/spdk-nbd.sock 00:29:01.779 15:54:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:29:01.779 15:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 59971 ']' 00:29:01.779 15:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:29:01.779 15:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:01.779 15:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:29:01.779 15:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:01.779 15:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:29:01.779 [2024-11-05 15:54:23.081340] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 
00:29:01.779 [2024-11-05 15:54:23.081456] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:02.037 [2024-11-05 15:54:23.253076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:02.037 [2024-11-05 15:54:23.358424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:02.603 15:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:02.603 15:54:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:29:02.603 15:54:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:29:02.603 15:54:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:02.603 15:54:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:29:02.603 15:54:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:29:02.603 15:54:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:29:02.603 15:54:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:02.603 15:54:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:29:02.603 15:54:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:29:02.603 15:54:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:29:02.603 15:54:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:29:02.603 15:54:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:29:02.603 15:54:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:29:02.603 15:54:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:29:02.861 15:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:29:02.861 15:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:29:02.861 15:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:29:02.861 15:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:29:02.861 15:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:29:02.861 15:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:29:02.861 15:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:29:02.861 15:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:29:02.861 15:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:29:02.861 15:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:29:02.861 15:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:29:02.861 15:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:02.861 1+0 records in 
00:29:02.861 1+0 records out 00:29:02.861 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260097 s, 15.7 MB/s 00:29:02.861 15:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:02.861 15:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:29:02.862 15:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:02.862 15:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:29:02.862 15:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:29:02.862 15:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:02.862 15:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:29:02.862 15:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:29:03.119 15:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:29:03.119 15:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:29:03.119 15:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:29:03.119 15:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:29:03.119 15:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:29:03.119 15:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:29:03.119 15:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:29:03.119 15:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:29:03.119 15:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:29:03.119 15:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:29:03.119 15:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:29:03.119 15:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:03.119 1+0 records in 00:29:03.119 1+0 records out 00:29:03.119 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000338439 s, 12.1 MB/s 00:29:03.119 15:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:03.119 15:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:29:03.119 15:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:03.119 15:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:29:03.119 15:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:29:03.119 15:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:03.119 15:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:29:03.119 15:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:29:03.377 15:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:29:03.377 15:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:29:03.377 15:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:29:03.377 15:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd2 00:29:03.377 15:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:29:03.377 15:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:29:03.377 15:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:29:03.377 15:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd2 /proc/partitions 00:29:03.377 15:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:29:03.377 15:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:29:03.377 15:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:29:03.377 15:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:03.377 1+0 records in 00:29:03.377 1+0 records out 00:29:03.377 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00033923 s, 12.1 MB/s 00:29:03.377 15:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:03.377 15:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:29:03.377 15:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:03.377 15:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:29:03.377 15:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:29:03.377 15:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:03.377 15:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:29:03.377 15:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:29:03.635 15:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:29:03.635 15:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:29:03.635 15:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:29:03.635 15:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd3 00:29:03.635 15:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:29:03.635 15:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:29:03.635 15:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:29:03.635 15:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd3 /proc/partitions 00:29:03.635 15:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:29:03.635 15:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:29:03.635 15:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:29:03.635 15:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:03.635 1+0 records in 00:29:03.635 1+0 records out 00:29:03.635 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000423599 s, 9.7 MB/s 00:29:03.635 15:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:03.635 15:54:24 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:29:03.635 15:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:03.635 15:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:29:03.635 15:54:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:29:03.635 15:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:03.635 15:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:29:03.635 15:54:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:29:03.893 15:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:29:03.893 15:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:29:03.893 15:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:29:03.893 15:54:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd4 00:29:03.893 15:54:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:29:03.893 15:54:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:29:03.893 15:54:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:29:03.893 15:54:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd4 /proc/partitions 00:29:03.893 15:54:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:29:03.893 15:54:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:29:03.893 15:54:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:29:03.893 15:54:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:03.893 1+0 records in 00:29:03.893 1+0 records out 00:29:03.893 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000493605 s, 8.3 MB/s 00:29:03.893 15:54:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:03.893 15:54:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:29:03.893 15:54:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:03.893 15:54:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:29:03.893 15:54:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:29:03.893 15:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:03.893 15:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:29:03.893 15:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:29:04.151 15:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:29:04.151 15:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:29:04.151 15:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:29:04.151 15:54:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd5 00:29:04.151 15:54:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:29:04.151 15:54:25 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@873 -- # (( i = 1 )) 00:29:04.151 15:54:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:29:04.151 15:54:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd5 /proc/partitions 00:29:04.151 15:54:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:29:04.151 15:54:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:29:04.151 15:54:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:29:04.151 15:54:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:04.151 1+0 records in 00:29:04.151 1+0 records out 00:29:04.151 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00041412 s, 9.9 MB/s 00:29:04.151 15:54:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:04.151 15:54:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:29:04.151 15:54:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:04.151 15:54:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:29:04.151 15:54:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:29:04.151 15:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:04.151 15:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:29:04.151 15:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:04.408 15:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:29:04.408 { 00:29:04.408 "nbd_device": "/dev/nbd0", 00:29:04.408 "bdev_name": "Nvme0n1" 00:29:04.408 }, 00:29:04.408 { 00:29:04.408 "nbd_device": "/dev/nbd1", 00:29:04.408 "bdev_name": "Nvme1n1" 00:29:04.408 }, 00:29:04.408 { 00:29:04.408 "nbd_device": "/dev/nbd2", 00:29:04.408 "bdev_name": "Nvme2n1" 00:29:04.408 }, 00:29:04.408 { 00:29:04.408 "nbd_device": "/dev/nbd3", 00:29:04.408 "bdev_name": "Nvme2n2" 00:29:04.408 }, 00:29:04.408 { 00:29:04.408 "nbd_device": "/dev/nbd4", 00:29:04.408 "bdev_name": "Nvme2n3" 00:29:04.408 }, 00:29:04.408 { 00:29:04.408 "nbd_device": "/dev/nbd5", 00:29:04.408 "bdev_name": "Nvme3n1" 00:29:04.408 } 00:29:04.408 ]' 00:29:04.408 15:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:29:04.408 15:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:29:04.408 { 00:29:04.408 "nbd_device": "/dev/nbd0", 00:29:04.408 "bdev_name": "Nvme0n1" 00:29:04.408 }, 00:29:04.408 { 00:29:04.408 "nbd_device": "/dev/nbd1", 00:29:04.408 "bdev_name": "Nvme1n1" 00:29:04.408 }, 00:29:04.408 { 00:29:04.408 "nbd_device": "/dev/nbd2", 00:29:04.408 "bdev_name": "Nvme2n1" 00:29:04.408 }, 00:29:04.408 { 00:29:04.408 "nbd_device": "/dev/nbd3", 00:29:04.408 "bdev_name": "Nvme2n2" 00:29:04.408 }, 00:29:04.408 { 00:29:04.408 "nbd_device": "/dev/nbd4", 00:29:04.408 "bdev_name": "Nvme2n3" 00:29:04.408 }, 00:29:04.408 { 00:29:04.408 "nbd_device": "/dev/nbd5", 00:29:04.408 "bdev_name": "Nvme3n1" 00:29:04.408 } 00:29:04.408 ]' 00:29:04.408 15:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:29:04.408 15:54:25 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:29:04.408 15:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:04.408 15:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:29:04.408 15:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:04.408 15:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:29:04.408 15:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:04.408 15:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:04.666 15:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:04.666 15:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:04.666 15:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:04.666 15:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:04.666 15:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:04.666 15:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:04.666 15:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:04.666 15:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:04.666 15:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:04.666 15:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:29:04.666 15:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:04.666 15:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:04.666 15:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:04.666 15:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:04.666 15:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:04.666 15:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:04.666 15:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:04.666 15:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:04.666 15:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:04.666 15:54:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:29:04.924 15:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:29:04.924 15:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:29:04.924 15:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:29:04.924 15:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:04.924 15:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:04.924 15:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:29:04.924 15:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:04.924 15:54:26 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:29:04.924 15:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:04.924 15:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:29:05.182 15:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:29:05.182 15:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:29:05.182 15:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:29:05.182 15:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:05.182 15:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:05.182 15:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:29:05.182 15:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:05.182 15:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:05.182 15:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:05.182 15:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:29:05.440 15:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:29:05.440 15:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:29:05.440 15:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:29:05.440 15:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:05.440 15:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:05.440 15:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:29:05.440 15:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:05.440 15:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:05.440 15:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:05.440 15:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:29:05.698 15:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:29:05.698 15:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:29:05.698 15:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:29:05.698 15:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:05.698 15:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:05.698 15:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:29:05.698 15:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:05.698 15:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:05.698 15:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:05.698 15:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:05.698 15:54:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:05.698 15:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:29:05.698 15:54:27 
00:29:05.698 15:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:29:05.957 15:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:29:05.957 15:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:29:05.957 15:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:29:05.957 15:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:29:05.957 15:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:29:05.957 15:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:29:05.957 15:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0
00:29:05.957 15:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']'
00:29:05.957 15:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0
00:29:05.957 15:54:27 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13'
00:29:05.957 15:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:29:05.957 15:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:29:05.957 15:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list
00:29:05.957 15:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:29:05.957 15:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list
00:29:05.957 15:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13'
00:29:05.957 15:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:29:05.957 15:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:29:05.957 15:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list
00:29:05.957 15:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:29:05.958 15:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list
00:29:05.958 15:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i
00:29:05.958 15:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:29:05.958 15:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:29:05.958 15:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
00:29:05.958 /dev/nbd0
00:29:05.958 15:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:29:05.958 15:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:29:05.958 15:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0
00:29:05.958 15:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i
00:29:05.958 15:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 ))
00:29:05.958 15:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 ))
00:29:05.958 15:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions
00:29:05.958 15:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break
00:29:05.958 15:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 ))
00:29:05.958 15:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 ))
00:29:05.958 15:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:29:05.958 1+0 records in
00:29:05.958 1+0 records out
00:29:05.958 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000528225 s, 7.8 MB/s
00:29:05.958 15:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:29:05.958 15:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096
00:29:05.958 15:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:29:05.958 15:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']'
00:29:05.958 15:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0
00:29:05.958 15:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:29:05.958 15:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:29:05.958 15:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1
00:29:06.215 /dev/nbd1
00:29:06.215 15:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:29:06.215 15:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:29:06.215 15:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1
00:29:06.215 15:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i
00:29:06.215 15:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 ))
00:29:06.215 15:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 ))
00:29:06.215 15:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions
00:29:06.215 15:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break
00:29:06.216 15:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 ))
00:29:06.216 15:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 ))
00:29:06.216 15:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:29:06.216 1+0 records in
00:29:06.216 1+0 records out
00:29:06.216 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240205 s, 17.1 MB/s
00:29:06.216 15:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:29:06.216 15:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096
00:29:06.216 15:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:29:06.216 15:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']'
00:29:06.216 15:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0
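[Editor's note] The attach side traced above is the mirror image of the teardown: wait for the node to appear in /proc/partitions, then prove it actually serves I/O with one 4 KiB O_DIRECT read. A hedged reconstruction of that waitfornbd helper (temporary-file path and sleep interval are assumptions; the real version is in common/autotest_common.sh):

    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                       # retry interval is an assumption
        done
        # A single direct read confirms the device is usable, as in the dd trace above.
        for ((i = 1; i <= 20; i++)); do
            if dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct 2>/dev/null; then
                size=$(stat -c %s /tmp/nbdtest)
                rm -f /tmp/nbdtest
                [ "$size" != 0 ] && return 0
            fi
            sleep 0.1
        done
        return 1
    }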
00:29:06.216 15:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:29:06.216 15:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:29:06.216 15:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10
00:29:06.473 /dev/nbd10
00:29:06.473 15:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10
00:29:06.473 15:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10
00:29:06.473 15:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd10
00:29:06.473 15:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i
00:29:06.473 15:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 ))
00:29:06.473 15:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 ))
00:29:06.473 15:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd10 /proc/partitions
00:29:06.473 15:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break
00:29:06.473 15:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 ))
00:29:06.473 15:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 ))
00:29:06.473 15:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:29:06.473 1+0 records in
00:29:06.473 1+0 records out
00:29:06.473 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000398918 s, 10.3 MB/s
00:29:06.473 15:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:29:06.473 15:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096
00:29:06.473 15:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:29:06.473 15:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']'
00:29:06.473 15:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0
00:29:06.473 15:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:29:06.473 15:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:29:06.473 15:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11
00:29:06.731 /dev/nbd11
00:29:06.731 15:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11
00:29:06.731 15:54:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11
00:29:06.731 15:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd11
00:29:06.731 15:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i
00:29:06.731 15:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 ))
00:29:06.731 15:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 ))
00:29:06.731 15:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd11 /proc/partitions
00:29:06.731 15:54:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break
00:29:06.731 15:54:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 ))
00:29:06.731 15:54:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 ))
00:29:06.731 15:54:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:29:06.731 1+0 records in
00:29:06.731 1+0 records out
00:29:06.731 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000289706 s, 14.1 MB/s
00:29:06.731 15:54:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:29:06.731 15:54:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096
00:29:06.731 15:54:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:29:06.731 15:54:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']'
00:29:06.731 15:54:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0
00:29:06.731 15:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:29:06.731 15:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:29:06.731 15:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12
00:29:06.989 /dev/nbd12
00:29:06.990 15:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12
00:29:06.990 15:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12
00:29:06.990 15:54:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd12
00:29:06.990 15:54:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i
00:29:06.990 15:54:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 ))
00:29:06.990 15:54:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 ))
00:29:06.990 15:54:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd12 /proc/partitions
00:29:06.990 15:54:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break
00:29:06.990 15:54:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 ))
00:29:06.990 15:54:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 ))
00:29:06.990 15:54:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:29:06.990 1+0 records in
00:29:06.990 1+0 records out
00:29:06.990 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000420528 s, 9.7 MB/s
00:29:06.990 15:54:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:29:06.990 15:54:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096
00:29:06.990 15:54:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:29:06.990 15:54:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']'
00:29:06.990 15:54:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0
00:29:06.990 15:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:29:06.990 15:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:29:06.990 15:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13
00:29:07.248 /dev/nbd13
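[Editor's note] Each attach above is a plain JSON-RPC call against the app's Unix socket; the RPC names (nbd_start_disk, nbd_get_disks, nbd_stop_disk) all appear verbatim in this trace. As a condensed usage sketch, with the paths and names taken from the log:

    # Export a bdev as a kernel NBD device, list the mappings, detach again.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    "$rpc" -s "$sock" nbd_start_disk Nvme0n1 /dev/nbd0   # prints the nbd node on success
    "$rpc" -s "$sock" nbd_get_disks                      # JSON array of {nbd_device, bdev_name}
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0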
00:29:07.248 15:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13
00:29:07.248 15:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13
00:29:07.248 15:54:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd13
00:29:07.248 15:54:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i
00:29:07.248 15:54:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 ))
00:29:07.248 15:54:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 ))
00:29:07.248 15:54:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd13 /proc/partitions
00:29:07.248 15:54:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break
00:29:07.248 15:54:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 ))
00:29:07.248 15:54:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 ))
00:29:07.248 15:54:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:29:07.248 1+0 records in
00:29:07.248 1+0 records out
00:29:07.248 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000340143 s, 12.0 MB/s
00:29:07.248 15:54:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:29:07.248 15:54:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096
00:29:07.248 15:54:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:29:07.248 15:54:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']'
00:29:07.248 15:54:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0
00:29:07.248 15:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:29:07.248 15:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:29:07.248 15:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:29:07.248 15:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:29:07.248 15:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:29:07.506 15:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:29:07.506 {
00:29:07.506 "nbd_device": "/dev/nbd0",
00:29:07.506 "bdev_name": "Nvme0n1"
00:29:07.506 },
00:29:07.506 {
00:29:07.506 "nbd_device": "/dev/nbd1",
00:29:07.506 "bdev_name": "Nvme1n1"
00:29:07.506 },
00:29:07.506 {
00:29:07.506 "nbd_device": "/dev/nbd10",
00:29:07.506 "bdev_name": "Nvme2n1"
00:29:07.506 },
00:29:07.506 {
00:29:07.506 "nbd_device": "/dev/nbd11",
00:29:07.506 "bdev_name": "Nvme2n2"
00:29:07.506 },
00:29:07.506 {
00:29:07.506 "nbd_device": "/dev/nbd12",
00:29:07.506 "bdev_name": "Nvme2n3"
00:29:07.506 },
00:29:07.506 {
00:29:07.506 "nbd_device": "/dev/nbd13",
00:29:07.506 "bdev_name": "Nvme3n1"
00:29:07.506 }
00:29:07.506 ]'
00:29:07.506 15:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[
00:29:07.506 {
00:29:07.506 "nbd_device": "/dev/nbd0",
00:29:07.506 "bdev_name": "Nvme0n1"
00:29:07.506 },
00:29:07.506 {
00:29:07.506 "nbd_device": "/dev/nbd1",
00:29:07.506 "bdev_name": "Nvme1n1"
00:29:07.506 },
00:29:07.506 {
00:29:07.506 "nbd_device": "/dev/nbd10",
00:29:07.506 "bdev_name": "Nvme2n1"
00:29:07.506 },
00:29:07.506 {
00:29:07.506 "nbd_device": "/dev/nbd11",
00:29:07.506 "bdev_name": "Nvme2n2"
00:29:07.506 },
00:29:07.506 {
00:29:07.506 "nbd_device": "/dev/nbd12",
00:29:07.506 "bdev_name": "Nvme2n3"
00:29:07.506 },
00:29:07.506 {
00:29:07.506 "nbd_device": "/dev/nbd13",
00:29:07.506 "bdev_name": "Nvme3n1"
00:29:07.506 }
00:29:07.506 ]'
00:29:07.506 15:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:29:07.506 15:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:29:07.506 /dev/nbd1
00:29:07.506 /dev/nbd10
00:29:07.506 /dev/nbd11
00:29:07.506 /dev/nbd12
00:29:07.506 /dev/nbd13'
00:29:07.506 15:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:29:07.506 /dev/nbd1
00:29:07.506 /dev/nbd10
00:29:07.506 /dev/nbd11
00:29:07.506 /dev/nbd12
00:29:07.506 /dev/nbd13'
00:29:07.506 15:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:29:07.506 15:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6
00:29:07.506 15:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6
00:29:07.506 15:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6
00:29:07.506 15:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']'
00:29:07.506 15:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write
00:29:07.506 15:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:29:07.506 15:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:29:07.506 15:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write
00:29:07.506 15:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:29:07.506 15:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:29:07.506 15:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256
00:29:07.506 256+0 records in
00:29:07.506 256+0 records out
00:29:07.506 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103977 s, 101 MB/s
00:29:07.506 15:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:29:07.506 15:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:29:07.506 256+0 records in
00:29:07.506 256+0 records out
00:29:07.506 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0623539 s, 16.8 MB/s
00:29:07.506 15:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:29:07.506 15:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:29:07.764 256+0 records in
00:29:07.764 256+0 records out
00:29:07.764 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0674086 s, 15.6 MB/s
00:29:07.764 15:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:29:07.764 15:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct
00:29:07.764 256+0 records in
00:29:07.764 256+0 records out
00:29:07.764 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0646125 s, 16.2 MB/s
00:29:07.764 15:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:29:07.764 15:54:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct
00:29:07.764 256+0 records in
00:29:07.764 256+0 records out
00:29:07.764 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0652031 s, 16.1 MB/s
00:29:07.764 15:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:29:07.764 15:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct
00:29:07.764 256+0 records in
00:29:07.764 256+0 records out
00:29:07.764 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0655692 s, 16.0 MB/s
00:29:07.764 15:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:29:07.764 15:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct
00:29:08.022 256+0 records in
00:29:08.022 256+0 records out
00:29:08.022 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0644469 s, 16.3 MB/s
00:29:08.022 15:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify
00:29:08.022 15:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:29:08.022 15:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:29:08.022 15:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify
00:29:08.022 15:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:29:08.022 15:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:29:08.022 15:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:29:08.022 15:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:29:08.022 15:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0
00:29:08.022 15:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:29:08.022 15:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1
00:29:08.022 15:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:29:08.022 15:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10
00:29:08.022 15:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:29:08.022 15:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11
00:29:08.022 15:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:29:08.022 15:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12
00:29:08.022 15:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:29:08.022 15:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13
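[Editor's note] The write/verify pass above reduces to: generate 1 MiB of random data once, stream it to every exported device with O_DIRECT, then compare each device back against the source file. A condensed sketch of that loop, using the exact paths and flags from the trace:

    tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)

    dd if=/dev/urandom of="$tmp" bs=4096 count=256          # 1 MiB of random payload
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp" "$dev"                          # any byte mismatch fails the test
    done
    rm "$tmp"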
00:29:08.022 15:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:29:08.022 15:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13'
00:29:08.022 15:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:29:08.022 15:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:29:08.022 15:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:29:08.022 15:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:29:08.022 15:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:29:08.022 15:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:29:08.279 15:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:29:08.279 15:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:29:08.279 15:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:29:08.279 15:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:29:08.279 15:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:29:08.279 15:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:29:08.279 15:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:29:08.279 15:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:29:08.279 15:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:29:08.279 15:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:29:08.279 15:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:29:08.279 15:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:29:08.279 15:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:29:08.280 15:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:29:08.280 15:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:29:08.280 15:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:29:08.280 15:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:29:08.280 15:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:29:08.280 15:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:29:08.280 15:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10
00:29:08.536 15:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10
00:29:08.536 15:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10
00:29:08.536 15:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10
00:29:08.536 15:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:29:08.536 15:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:29:08.536 15:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions
00:29:08.536 15:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:29:08.536 15:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:29:08.536 15:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:29:08.536 15:54:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11
00:29:08.793 15:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11
00:29:08.793 15:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11
00:29:08.793 15:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11
00:29:08.793 15:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:29:08.793 15:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:29:08.793 15:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions
00:29:08.793 15:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:29:08.793 15:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:29:08.793 15:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:29:08.793 15:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12
00:29:09.059 15:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12
00:29:09.059 15:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12
00:29:09.059 15:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12
00:29:09.059 15:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:29:09.059 15:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:29:09.059 15:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions
00:29:09.059 15:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:29:09.059 15:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:29:09.059 15:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:29:09.059 15:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13
00:29:09.329 15:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13
00:29:09.329 15:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13
00:29:09.329 15:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13
00:29:09.329 15:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:29:09.329 15:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:29:09.329 15:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions
00:29:09.329 15:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:29:09.329 15:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:29:09.329 15:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:29:09.329 15:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:29:09.329 15:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:29:09.587 15:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:29:09.587 15:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:29:09.587 15:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:29:09.587 15:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:29:09.587 15:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:29:09.587 15:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:29:09.587 15:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:29:09.587 15:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:29:09.587 15:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:29:09.587 15:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0
00:29:09.587 15:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:29:09.587 15:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0
00:29:09.587 15:54:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0
00:29:09.587 15:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:29:09.587 15:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0
00:29:09.587 15:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
00:29:09.844 malloc_lvol_verify
00:29:09.844 15:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
00:29:09.844 ae8c161b-aedf-479b-b2d0-9ff87f6a95a7
00:29:10.102 15:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
00:29:10.102 3c0963dd-7a08-4e90-94ef-28f1c5f11332
00:29:10.102 15:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
00:29:10.360 /dev/nbd0
00:29:10.360 15:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0
00:29:10.360 15:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0
00:29:10.360 15:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]]
00:29:10.360 15:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 ))
00:29:10.360 15:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0
00:29:10.360 mke2fs 1.47.0 (5-Feb-2023)
00:29:10.360 Discarding device blocks: 0/4096 done
00:29:10.360 Creating filesystem with 4096 1k blocks and 1024 inodes
00:29:10.360
00:29:10.360 Allocating group tables: 0/1 done
00:29:10.360 Writing inode tables: 0/1 done
00:29:10.360 Creating journal (1024 blocks): done
00:29:10.360 Writing superblocks and filesystem accounting information: 0/1 done
00:29:10.360
00:29:10.360 15:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
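[Editor's note] The lvol verification above (nbd_with_lvol_verify) stacks a logical volume on a malloc bdev, exports it over NBD, and checks that the kernel can put a filesystem on it end to end. A reconstructed outline of the flow, with every RPC name and size taken from the trace (error handling omitted):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    "$rpc" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512  # 16 MiB backing bdev, 512 B blocks
    "$rpc" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs
    "$rpc" -s "$sock" bdev_lvol_create lvol 4 -l lvs                   # 4 MiB logical volume
    "$rpc" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0
    mkfs.ext4 /dev/nbd0                                                # must succeed for the test to pass
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0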
00:29:10.360 15:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:29:10.360 15:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:29:10.360 15:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:29:10.360 15:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:29:10.360 15:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:29:10.360 15:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:29:10.618 15:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:29:10.618 15:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:29:10.618 15:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:29:10.618 15:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:29:10.618 15:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:29:10.618 15:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:29:10.618 15:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:29:10.618 15:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:29:10.618 15:54:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 59971
00:29:10.618 15:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 59971 ']'
00:29:10.618 15:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 59971
00:29:10.618 15:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@957 -- # uname
00:29:10.618 15:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:29:10.618 15:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59971
00:29:10.618 killing process with pid 59971
00:29:10.618 15:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:29:10.618 15:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:29:10.618 15:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59971'
00:29:10.618 15:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@971 -- # kill 59971
00:29:10.618 15:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@976 -- # wait 59971
00:29:11.579 15:54:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT
00:29:11.579
00:29:11.579 real 0m9.684s
00:29:11.579 user 0m13.924s
00:29:11.579 sys 0m3.040s
00:29:11.579 15:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable
00:29:11.579 ************************************
00:29:11.579 END TEST bdev_nbd
00:29:11.579 ************************************
00:29:11.579 15:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:29:11.579 skipping fio tests on NVMe due to multi-ns failures.
00:29:11.579 15:54:32 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]]
00:29:11.579 15:54:32 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']'
00:29:11.579 15:54:32 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.'
00:29:11.579 15:54:32 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT
00:29:11.579 15:54:32 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:29:11.579 15:54:32 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']'
00:29:11.579 15:54:32 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable
00:29:11.579 ************************************
00:29:11.579 START TEST bdev_verify
00:29:11.579 ************************************
00:29:11.579 15:54:32 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:29:11.579 15:54:32 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:29:11.579 [2024-11-05 15:54:32.796375] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization...
00:29:11.579 [2024-11-05 15:54:32.796496] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60345 ]
00:29:11.837 [2024-11-05 15:54:32.953648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:29:11.837 [2024-11-05 15:54:33.057673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:11.837 [2024-11-05 15:54:33.057674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:29:12.402 Running I/O for 5 seconds...
00:29:14.743 22272.00 IOPS, 87.00 MiB/s
[2024-11-05T15:54:37.038Z] 22496.00 IOPS, 87.88 MiB/s
[2024-11-05T15:54:37.970Z] 23104.00 IOPS, 90.25 MiB/s
[2024-11-05T15:54:38.903Z] 23008.00 IOPS, 89.88 MiB/s
[2024-11-05T15:54:38.903Z] 23462.40 IOPS, 91.65 MiB/s
00:29:17.541
00:29:17.541 Latency(us)
00:29:17.541 [2024-11-05T15:54:38.903Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:17.541 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:17.541 Verification LBA range: start 0x0 length 0xbd0bd
00:29:17.541 Nvme0n1 : 5.05 1938.45 7.57 0.00 0.00 65729.99 6175.51 66544.25
00:29:17.541 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:29:17.541 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:29:17.541 Nvme0n1 : 5.03 1906.98 7.45 0.00 0.00 66855.00 12603.08 75416.81
00:29:17.541 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:17.541 Verification LBA range: start 0x0 length 0xa0000
00:29:17.541 Nvme1n1 : 5.07 1945.78 7.60 0.00 0.00 65548.33 11292.36 63317.86
00:29:17.541 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:29:17.541 Verification LBA range: start 0xa0000 length 0xa0000
00:29:17.541 Nvme1n1 : 5.07 1918.93 7.50 0.00 0.00 66343.71 9830.40 64931.05
00:29:17.541 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:17.541 Verification LBA range: start 0x0 length 0x80000
00:29:17.541 Nvme2n1 : 5.07 1945.23 7.60 0.00 0.00 65498.77 11494.01 61704.66
00:29:17.541 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:29:17.541 Verification LBA range: start 0x80000 length 0x80000
00:29:17.541 Nvme2n1 : 5.07 1917.55 7.49 0.00 0.00 66214.81 12502.25 61301.37
00:29:17.541 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:17.541 Verification LBA range: start 0x0 length 0x80000
00:29:17.541 Nvme2n2 : 5.07 1944.67 7.60 0.00 0.00 65394.00 11594.83 59688.17
00:29:17.541 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:29:17.541 Verification LBA range: start 0x80000 length 0x80000
00:29:17.541 Nvme2n2 : 5.08 1916.22 7.49 0.00 0.00 66090.14 12048.54 62511.26
00:29:17.541 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:17.541 Verification LBA range: start 0x0 length 0x80000
00:29:17.541 Nvme2n3 : 5.07 1944.14 7.59 0.00 0.00 65268.29 11998.13 64124.46
00:29:17.541 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:29:17.541 Verification LBA range: start 0x80000 length 0x80000
00:29:17.541 Nvme2n3 : 5.08 1915.72 7.48 0.00 0.00 65996.97 11443.59 66947.54
00:29:17.541 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:17.541 Verification LBA range: start 0x0 length 0x20000
00:29:17.541 Nvme3n1 : 5.07 1942.72 7.59 0.00 0.00 65157.48 9729.58 66140.95
00:29:17.541 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:29:17.541 Verification LBA range: start 0x20000 length 0x20000
00:29:17.541 Nvme3n1 : 5.08 1915.21 7.48 0.00 0.00 65955.65 10838.65 70173.93
00:29:17.541 [2024-11-05T15:54:38.903Z] ===================================================================================================================
00:29:17.541 [2024-11-05T15:54:38.903Z] Total : 23151.59 90.44 0.00 0.00 65834.06 6175.51 75416.81
00:29:18.929
00:29:18.929 real 0m7.286s
00:29:18.929 user 0m13.695s
00:29:18.929 sys 0m0.189s
00:29:18.929 15:54:40 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1128 -- # xtrace_disable
00:29:18.929 ************************************
00:29:18.929 END TEST bdev_verify
00:29:18.929 15:54:40 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:29:18.929 ************************************
00:29:18.929 15:54:40 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:29:18.929 15:54:40 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']'
00:29:18.929 15:54:40 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable
00:29:18.929 ************************************
00:29:18.929 START TEST bdev_verify_big_io
00:29:18.929 ************************************
00:29:18.929 15:54:40 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:29:18.929 15:54:40 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:29:18.929 [2024-11-05 15:54:40.124683] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization...
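[Editor's note] All three verification workloads in this section are driven by the same bdevperf binary against the same JSON bdev config; only the workload flags differ. For reference, the three invocations exactly as they appear in this trace:

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json

    "$bdevperf" --json "$conf" -q 128 -o 4096  -w verify       -t 5 -C -m 0x3   # bdev_verify
    "$bdevperf" --json "$conf" -q 128 -o 65536 -w verify       -t 5 -C -m 0x3   # bdev_verify_big_io
    "$bdevperf" --json "$conf" -q 128 -o 4096  -w write_zeroes -t 1             # bdev_write_zeroes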
00:29:18.929 [2024-11-05 15:54:40.124809] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60443 ]
00:29:19.186 [2024-11-05 15:54:40.287154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:29:19.186 [2024-11-05 15:54:40.388897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:19.186 [2024-11-05 15:54:40.388921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:29:19.751 Running I/O for 5 seconds...
00:29:24.561 1805.00 IOPS, 112.81 MiB/s
[2024-11-05T15:54:46.855Z] 2754.50 IOPS, 172.16 MiB/s
[2024-11-05T15:54:47.114Z] 2581.00 IOPS, 161.31 MiB/s
[2024-11-05T15:54:47.114Z] 2537.75 IOPS, 158.61 MiB/s
00:29:25.752
00:29:25.752 Latency(us)
00:29:25.752 [2024-11-05T15:54:47.114Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:25.752 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:29:25.752 Verification LBA range: start 0x0 length 0xbd0b
00:29:25.752 Nvme0n1 : 5.59 154.51 9.66 0.00 0.00 796165.48 13611.32 929199.66
00:29:25.752 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:29:25.752 Verification LBA range: start 0xbd0b length 0xbd0b
00:29:25.752 Nvme0n1 : 5.67 113.00 7.06 0.00 0.00 1064177.86 33675.42 1180857.90
00:29:25.752 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:29:25.752 Verification LBA range: start 0x0 length 0xa000
00:29:25.752 Nvme1n1 : 5.59 156.45 9.78 0.00 0.00 765993.76 62511.26 793691.37
00:29:25.752 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:29:25.752 Verification LBA range: start 0xa000 length 0xa000
00:29:25.752 Nvme1n1 : 5.76 122.23 7.64 0.00 0.00 972752.17 83886.08 974369.08
00:29:25.752 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:29:25.752 Verification LBA range: start 0x0 length 0x8000
00:29:25.752 Nvme2n1 : 5.59 160.15 10.01 0.00 0.00 734802.60 104051.00 719484.46
00:29:25.752 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:29:25.752 Verification LBA range: start 0x8000 length 0x8000
00:29:25.752 Nvme2n1 : 5.85 127.09 7.94 0.00 0.00 904831.00 70173.93 942105.21
00:29:25.752 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:29:25.752 Verification LBA range: start 0x0 length 0x8000
00:29:25.752 Nvme2n2 : 5.70 168.36 10.52 0.00 0.00 682650.81 29642.44 832408.02
00:29:25.752 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:29:25.752 Verification LBA range: start 0x8000 length 0x8000
00:29:25.752 Nvme2n2 : 5.86 122.78 7.67 0.00 0.00 905743.00 21979.77 1974549.27
00:29:25.752 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:29:25.752 Verification LBA range: start 0x0 length 0x8000
00:29:25.752 Nvme2n3 : 5.76 177.69 11.11 0.00 0.00 630358.15 35086.97 851766.35
00:29:25.752 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:29:25.752 Verification LBA range: start 0x8000 length 0x8000
00:29:25.752 Nvme2n3 : 5.91 138.76 8.67 0.00 0.00 776184.04 13712.15 2000360.37
00:29:25.752 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:29:25.752 Verification LBA range: start 0x0 length 0x2000
00:29:25.752 Nvme3n1 : 5.83 197.46 12.34 0.00 0.00 552874.68 419.05 890483.00
00:29:25.752 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:29:25.752 Verification LBA range: start 0x2000 length 0x2000
00:29:25.752 Nvme3n1 : 6.01 213.64 13.35 0.00 0.00 490173.48 113.43 1167952.34
00:29:25.752 [2024-11-05T15:54:47.114Z] ===================================================================================================================
00:29:25.752 [2024-11-05T15:54:47.114Z] Total : 1852.12 115.76 0.00 0.00 740591.68 113.43 2000360.37
00:29:27.657 ************************************
00:29:27.657 END TEST bdev_verify_big_io
00:29:27.657 ************************************
00:29:27.657
00:29:27.657 real 0m8.622s
00:29:27.657 user 0m16.315s
00:29:27.657 sys 0m0.236s
00:29:27.657 15:54:48 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable
00:29:27.657 15:54:48 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:29:27.657 15:54:48 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:29:27.657 15:54:48 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']'
00:29:27.657 15:54:48 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable
00:29:27.657 ************************************
00:29:27.657 START TEST bdev_write_zeroes
00:29:27.657 ************************************
00:29:27.657 15:54:48 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:29:27.657 15:54:48 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:29:27.657 [2024-11-05 15:54:48.791455] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization...
00:29:27.657 [2024-11-05 15:54:48.791701] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60552 ]
00:29:27.657 [2024-11-05 15:54:48.952889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:27.915 [2024-11-05 15:54:49.052362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:29:28.483 Running I/O for 1 seconds...
00:29:29.416 39471.00 IOPS, 154.18 MiB/s
00:29:29.416
00:29:29.416 Latency(us)
00:29:29.416 [2024-11-05T15:54:50.778Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:29.416 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:29:29.416 Nvme0n1 : 1.02 6473.61 25.29 0.00 0.00 19742.51 5444.53 477505.38
00:29:29.416 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:29:29.416 Nvme1n1 : 1.02 6683.41 26.11 0.00 0.00 19083.15 8065.97 409751.24
00:29:29.416 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:29:29.416 Nvme2n1 : 1.02 6687.40 26.12 0.00 0.00 19045.04 7914.73 411364.43
00:29:29.416 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:29:29.416 Nvme2n2 : 1.02 6625.94 25.88 0.00 0.00 19179.62 8166.79 404911.66
00:29:29.416 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:29:29.416 Nvme2n3 : 1.03 6618.38 25.85 0.00 0.00 19153.19 8166.79 406524.85
00:29:29.416 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:29:29.416 Nvme3n1 : 1.03 6610.78 25.82 0.00 0.00 19140.25 8065.97 409751.24
00:29:29.416 [2024-11-05T15:54:50.778Z] ===================================================================================================================
00:29:29.416 [2024-11-05T15:54:50.778Z] Total : 39699.50 155.08 0.00 0.00 19221.28 5444.53 477505.38
00:29:30.348
00:29:30.348 real 0m2.651s
00:29:30.348 user 0m2.351s
00:29:30.348 sys 0m0.182s
00:29:30.348 15:54:51 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable
00:29:30.348 ************************************
00:29:30.348 END TEST bdev_write_zeroes
00:29:30.348 ************************************
00:29:30.348 15:54:51 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:29:30.348 15:54:51 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:29:30.348 15:54:51 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']'
00:29:30.348 15:54:51 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable
00:29:30.348 ************************************
00:29:30.348 START TEST bdev_json_nonenclosed
00:29:30.348 ************************************
00:29:30.348 15:54:51 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:29:30.348 15:54:51 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:29:30.348 [2024-11-05 15:54:51.484194] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization...
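[Editor's note] bdev_json_nonenclosed (starting above) and bdev_json_nonarray are negative tests: they feed deliberately malformed configs to bdevperf and only pass when the app rejects them and shuts down cleanly. A hedged sketch of the pattern — the fixture content below is illustrative, not the repo's actual nonenclosed.json:

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    # Hypothetical fixture: valid JSON fragments that are not enclosed in {}.
    printf '%s\n' '"subsystems": []' > /tmp/nonenclosed.json
    if "$bdevperf" --json /tmp/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1; then
        echo "FAIL: malformed config was accepted" >&2
        exit 1
    fi
    echo "OK: app rejected the config and exited non-zero"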
00:29:30.348 [2024-11-05 15:54:51.484452] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60607 ]
00:29:30.348 [2024-11-05 15:54:51.644066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:30.606 [2024-11-05 15:54:51.745915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:29:30.606 [2024-11-05 15:54:51.745997] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:29:30.606 [2024-11-05 15:54:51.746014] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:29:30.606 [2024-11-05 15:54:51.746023] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:29:30.606
00:29:30.606 real 0m0.506s
00:29:30.606 user 0m0.309s
00:29:30.606 sys 0m0.092s
00:29:30.606 15:54:51 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable
00:29:30.606 ************************************
00:29:30.606 END TEST bdev_json_nonenclosed
00:29:30.606 ************************************
00:29:30.606 15:54:51 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:29:30.606 15:54:51 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:29:30.606 15:54:51 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']'
00:29:30.606 15:54:51 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable
00:29:30.606 ************************************
00:29:30.606 START TEST bdev_json_nonarray
00:29:30.606 ************************************
00:29:30.606 15:54:51 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:29:30.606 15:54:51 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:29:30.864 [2024-11-05 15:54:52.027862] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization...
00:29:30.864 [2024-11-05 15:54:52.027972] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60627 ]
00:29:30.864 [2024-11-05 15:54:52.185162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:31.122 [2024-11-05 15:54:52.286476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:29:31.122 [2024-11-05 15:54:52.286562] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:29:31.122 [2024-11-05 15:54:52.286580] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:29:31.122 [2024-11-05 15:54:52.286589] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:29:31.122
00:29:31.122 real 0m0.505s
00:29:31.122 user 0m0.306s
00:29:31.122 sys 0m0.095s
00:29:31.122 15:54:52 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable
00:29:31.122 15:54:52 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:29:31.122 ************************************
00:29:31.122 END TEST bdev_json_nonarray
00:29:31.122 ************************************
00:29:31.380 15:54:52 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]]
00:29:31.380 15:54:52 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]]
00:29:31.380 15:54:52 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]]
00:29:31.380 15:54:52 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT
00:29:31.380 15:54:52 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup
00:29:31.380 15:54:52 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:29:31.380 15:54:52 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:29:31.380 15:54:52 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]]
00:29:31.380 15:54:52 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]]
00:29:31.380 15:54:52 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]]
00:29:31.380 15:54:52 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]]
00:29:31.380 ************************************
00:29:31.380 END TEST blockdev_nvme
00:29:31.380 ************************************
00:29:31.380
00:29:31.380 real 0m36.224s
00:29:31.380 user 0m56.612s
00:29:31.380 sys 0m4.947s
00:29:31.380 15:54:52 blockdev_nvme -- common/autotest_common.sh@1128 -- # xtrace_disable
00:29:31.380 15:54:52 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:29:31.380 15:54:52 -- spdk/autotest.sh@209 -- # uname -s
00:29:31.380 15:54:52 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]]
00:29:31.380 15:54:52 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt
00:29:31.380 15:54:52 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:29:31.380 15:54:52 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:29:31.380 15:54:52 -- common/autotest_common.sh@10 -- # set +x
00:29:31.380 ************************************
00:29:31.380 START TEST blockdev_nvme_gpt
00:29:31.380 ************************************
00:29:31.380 15:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt
00:29:31.380 * Looking for test storage...
00:29:31.380 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev
00:29:31.380 15:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:29:31.380 15:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # lcov --version
00:29:31.380 15:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:29:31.380 15:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:29:31.380 15:54:52 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:29:31.380 15:54:52 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l
00:29:31.380 15:54:52 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l
00:29:31.380 15:54:52 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-:
00:29:31.380 15:54:52 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1
00:29:31.380 15:54:52 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-:
00:29:31.380 15:54:52 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2
00:29:31.380 15:54:52 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<'
00:29:31.380 15:54:52 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2
00:29:31.380 15:54:52 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1
00:29:31.380 15:54:52 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:29:31.380 15:54:52 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in
00:29:31.380 15:54:52 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1
00:29:31.380 15:54:52 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 ))
00:29:31.380 15:54:52 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:29:31.380 15:54:52 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1
00:29:31.380 15:54:52 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1
00:29:31.380 15:54:52 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:29:31.380 15:54:52 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1
00:29:31.380 15:54:52 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1
00:29:31.380 15:54:52 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2
00:29:31.380 15:54:52 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2
00:29:31.380 15:54:52 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:29:31.380 15:54:52 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2
00:29:31.380 15:54:52 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2
00:29:31.380 15:54:52 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:29:31.380 15:54:52 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:29:31.380 15:54:52 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0
00:29:31.380 15:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:29:31.380 15:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:29:31.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:31.380 --rc genhtml_branch_coverage=1
00:29:31.380 --rc genhtml_function_coverage=1
00:29:31.380 --rc genhtml_legend=1
00:29:31.380 --rc geninfo_all_blocks=1
00:29:31.380 --rc geninfo_unexecuted_blocks=1
00:29:31.380
00:29:31.380 '
00:29:31.380 15:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:29:31.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:31.380 --rc
genhtml_branch_coverage=1 00:29:31.380 --rc genhtml_function_coverage=1 00:29:31.380 --rc genhtml_legend=1 00:29:31.380 --rc geninfo_all_blocks=1 00:29:31.380 --rc geninfo_unexecuted_blocks=1 00:29:31.380 00:29:31.380 ' 00:29:31.380 15:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:31.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:31.380 --rc genhtml_branch_coverage=1 00:29:31.380 --rc genhtml_function_coverage=1 00:29:31.380 --rc genhtml_legend=1 00:29:31.380 --rc geninfo_all_blocks=1 00:29:31.380 --rc geninfo_unexecuted_blocks=1 00:29:31.380 00:29:31.380 ' 00:29:31.380 15:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:31.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:31.380 --rc genhtml_branch_coverage=1 00:29:31.380 --rc genhtml_function_coverage=1 00:29:31.380 --rc genhtml_legend=1 00:29:31.380 --rc geninfo_all_blocks=1 00:29:31.380 --rc geninfo_unexecuted_blocks=1 00:29:31.380 00:29:31.380 ' 00:29:31.380 15:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:29:31.380 15:54:52 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:29:31.380 15:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:29:31.380 15:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:31.380 15:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:29:31.380 15:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:29:31.380 15:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:29:31.380 15:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:29:31.380 15:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:29:31.380 15:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:29:31.380 15:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:29:31.380 15:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:29:31.380 15:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:29:31.380 15:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:29:31.380 15:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:29:31.380 15:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:29:31.380 15:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:29:31.380 15:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:29:31.380 15:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:29:31.380 15:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:29:31.380 15:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:29:31.380 15:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:29:31.380 15:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:29:31.380 15:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:29:31.380 15:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60711 00:29:31.380 15:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:29:31.380 15:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 60711 
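
The cmp_versions trace above (entered through lt 1.15 2 while choosing lcov options) splits each version string into components and compares them numerically from the left. A simplified re-implementation of the same idea, assuming plain dot-separated numeric versions (the real scripts/common.sh helper also splits on '-' and ':' and provides gt/ge/le variants):

# Sketch: component-wise numeric "less than" for dotted version strings.
version_lt() {
    local -a ver1 ver2
    IFS=. read -ra ver1 <<< "$1"
    IFS=. read -ra ver2 <<< "$2"
    local i max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( i = 0; i < max; i++ )); do
        # A missing component counts as 0, so 1.15 compares as 1.15.0.
        local a=${ver1[i]:-0} b=${ver2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1  # equal versions are not "less than"
}
version_lt 1.15 2 && echo '1.15 < 2'   # prints: 1.15 < 2
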
00:29:31.380 15:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@833 -- # '[' -z 60711 ']' 00:29:31.380 15:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:31.380 15:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:31.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:31.380 15:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:31.380 15:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:31.380 15:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:29:31.380 15:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:29:31.638 [2024-11-05 15:54:52.763942] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 00:29:31.638 [2024-11-05 15:54:52.764063] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60711 ] 00:29:31.638 [2024-11-05 15:54:52.921711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:31.895 [2024-11-05 15:54:53.024526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:32.459 15:54:53 blockdev_nvme_gpt -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:32.459 15:54:53 blockdev_nvme_gpt -- common/autotest_common.sh@866 -- # return 0 00:29:32.459 15:54:53 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:29:32.459 15:54:53 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:29:32.459 15:54:53 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:29:32.716 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:32.716 Waiting for block devices as requested 00:29:32.973 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:29:32.973 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:29:32.973 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:29:32.973 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:29:38.232 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:29:38.232 15:54:59 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:29:38.232 15:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:29:38.232 15:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:29:38.232 15:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # local nvme bdf 00:29:38.232 15:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:29:38.232 15:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:29:38.232 15:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:29:38.232 15:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:38.232 15:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:29:38.232 15:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:29:38.233 15:54:59 
blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:29:38.233 15:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:29:38.233 15:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:29:38.233 15:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:29:38.233 15:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:29:38.233 15:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:29:38.233 15:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:29:38.233 15:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:29:38.233 15:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:29:38.233 15:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:29:38.233 15:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:29:38.233 15:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:29:38.233 15:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:29:38.233 15:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:29:38.233 15:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:29:38.233 15:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:29:38.233 15:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:29:38.233 15:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:29:38.233 15:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:29:38.233 15:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:29:38.233 15:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:29:38.233 15:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:29:38.233 15:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:29:38.233 15:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:29:38.233 15:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:29:38.233 15:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:29:38.233 15:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:29:38.233 15:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:29:38.233 15:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:29:38.233 15:54:59 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:29:38.233 15:54:59 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:29:38.233 15:54:59 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:29:38.233 15:54:59 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:29:38.233 15:54:59 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:29:38.233 15:54:59 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:29:38.233 15:54:59 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:29:38.233 15:54:59 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:29:38.233 BYT; 00:29:38.233 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:29:38.233 15:54:59 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:29:38.233 BYT; 00:29:38.233 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:29:38.233 15:54:59 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:29:38.233 15:54:59 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:29:38.233 15:54:59 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:29:38.233 15:54:59 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:29:38.233 15:54:59 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:29:38.233 15:54:59 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:29:38.233 15:54:59 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:29:38.233 15:54:59 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:29:38.233 15:54:59 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:29:38.233 15:54:59 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:29:38.233 15:54:59 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:29:38.233 15:54:59 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:29:38.233 15:54:59 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:29:38.233 15:54:59 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:29:38.233 15:54:59 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:29:38.233 15:54:59 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:29:38.233 15:54:59 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:29:38.233 15:54:59 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:29:38.233 15:54:59 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:29:38.233 15:54:59 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:29:38.233 15:54:59 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:29:38.233 15:54:59 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:29:38.233 15:54:59 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:29:38.233 15:54:59 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:29:38.233 15:54:59 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:29:38.233 15:54:59 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:29:38.233 15:54:59 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:29:38.233 15:54:59 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:29:38.233 15:54:59 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:29:39.184 The operation has completed successfully. 00:29:39.184 15:55:00 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:29:40.116 The operation has completed successfully. 00:29:40.116 15:55:01 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:29:40.702 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:40.960 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:29:40.960 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:29:41.219 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:29:41.219 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:29:41.219 15:55:02 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:29:41.219 15:55:02 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.219 15:55:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:29:41.219 [] 00:29:41.219 15:55:02 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.219 15:55:02 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:29:41.219 15:55:02 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:29:41.219 15:55:02 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:29:41.219 15:55:02 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:41.219 15:55:02 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:29:41.219 15:55:02 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.219 15:55:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:29:41.477 15:55:02 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.477 15:55:02 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:29:41.477 15:55:02 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.477 15:55:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:29:41.477 15:55:02 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.477 15:55:02 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:29:41.477 15:55:02 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:29:41.477 15:55:02 
blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.477 15:55:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:29:41.477 15:55:02 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.477 15:55:02 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:29:41.477 15:55:02 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.477 15:55:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:29:41.477 15:55:02 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.477 15:55:02 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:29:41.477 15:55:02 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.477 15:55:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:29:41.477 15:55:02 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.477 15:55:02 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:29:41.477 15:55:02 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:29:41.477 15:55:02 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.477 15:55:02 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:29:41.477 15:55:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:29:41.735 15:55:02 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.736 15:55:02 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:29:41.736 15:55:02 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:29:41.736 15:55:02 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "ead8553a-0aa2-489d-a608-a2d08c89f4de"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "ead8553a-0aa2-489d-a608-a2d08c89f4de",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "457faae8-d3fc-4ed7-853a-95a8b6597f57"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "457faae8-d3fc-4ed7-853a-95a8b6597f57",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "c63f2825-575a-4ea1-a897-c54905e13857"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c63f2825-575a-4ea1-a897-c54905e13857",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "e53dc4b0-00b2-405a-94e6-ca34b2bd8a20"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e53dc4b0-00b2-405a-94e6-ca34b2bd8a20",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "1cb14f9c-5bf4-4eb7-8606-06d8ed02d27c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "1cb14f9c-5bf4-4eb7-8606-06d8ed02d27c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:29:41.736 15:55:02 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:29:41.736 15:55:02 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:29:41.736 15:55:02 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:29:41.736 15:55:02 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 60711 00:29:41.736 15:55:02 blockdev_nvme_gpt -- common/autotest_common.sh@952 -- # '[' -z 60711 ']' 00:29:41.736 15:55:02 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # kill -0 60711 00:29:41.736 15:55:02 blockdev_nvme_gpt -- common/autotest_common.sh@957 -- # uname 00:29:41.736 15:55:02 blockdev_nvme_gpt -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:41.736 15:55:02 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60711 00:29:41.736 15:55:02 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:41.736 killing process with pid 60711 00:29:41.736 15:55:02 blockdev_nvme_gpt -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:41.736 15:55:02 blockdev_nvme_gpt -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60711' 00:29:41.736 15:55:02 blockdev_nvme_gpt -- common/autotest_common.sh@971 -- # kill 60711 00:29:41.736 15:55:02 blockdev_nvme_gpt -- common/autotest_common.sh@976 -- # wait 60711 00:29:43.111 15:55:04 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:43.111 15:55:04 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:29:43.111 15:55:04 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:29:43.111 15:55:04 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:43.111 15:55:04 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:29:43.111 ************************************ 00:29:43.111 START TEST bdev_hello_world 00:29:43.111 ************************************ 00:29:43.111 15:55:04 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:29:43.111 
[2024-11-05 15:55:04.444771] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 00:29:43.111 [2024-11-05 15:55:04.444866] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61336 ] 00:29:43.369 [2024-11-05 15:55:04.599052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:43.369 [2024-11-05 15:55:04.700363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:43.936 [2024-11-05 15:55:05.238941] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:29:43.936 [2024-11-05 15:55:05.238991] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:29:43.936 [2024-11-05 15:55:05.239013] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:29:43.936 [2024-11-05 15:55:05.241451] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:29:43.936 [2024-11-05 15:55:05.242050] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:29:43.936 [2024-11-05 15:55:05.242076] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:29:43.936 [2024-11-05 15:55:05.242202] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:29:43.936 00:29:43.936 [2024-11-05 15:55:05.242219] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:29:44.868 00:29:44.868 real 0m1.561s 00:29:44.868 user 0m1.296s 00:29:44.868 sys 0m0.158s 00:29:44.868 15:55:05 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:44.868 15:55:05 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:29:44.868 ************************************ 00:29:44.868 END TEST bdev_hello_world 00:29:44.868 ************************************ 00:29:44.868 15:55:05 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:29:44.868 15:55:05 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:29:44.868 15:55:05 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:44.868 15:55:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:29:44.868 ************************************ 00:29:44.868 START TEST bdev_bounds 00:29:44.868 ************************************ 00:29:44.868 15:55:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:29:44.868 15:55:05 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61372 00:29:44.868 15:55:05 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:29:44.868 Process bdevio pid: 61372 00:29:44.868 15:55:05 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61372' 00:29:44.868 15:55:05 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61372 00:29:44.868 15:55:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 61372 ']' 00:29:44.868 15:55:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:44.868 15:55:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:44.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
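
Every 'START TEST ... END TEST' banner in this log, including the bdev_bounds run starting here, comes from the run_test wrapper in autotest_common.sh. A simplified sketch covering only the behavior visible in the log, the banner lines plus the bash time report (the real helper also juggles xtrace state and argument checks):

# Sketch: banner-and-time a named test the way run_test appears to.
run_test_sketch() {
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"
    local rc=$?
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
    return "$rc"
}
run_test_sketch demo_sleep sleep 1   # demo invocation
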
00:29:44.868 15:55:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:44.869 15:55:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:44.869 15:55:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:29:44.869 15:55:05 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:29:44.869 [2024-11-05 15:55:06.057000] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 00:29:44.869 [2024-11-05 15:55:06.057125] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61372 ] 00:29:44.869 [2024-11-05 15:55:06.214068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:45.127 [2024-11-05 15:55:06.316830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:45.127 [2024-11-05 15:55:06.316910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:45.127 [2024-11-05 15:55:06.316919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:45.704 15:55:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:45.704 15:55:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:29:45.704 15:55:06 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:29:45.704 I/O targets: 00:29:45.704 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:29:45.704 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:29:45.705 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:29:45.705 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:29:45.705 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:29:45.705 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:29:45.705 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:29:45.705 00:29:45.705 00:29:45.705 CUnit - A unit testing framework for C - Version 2.1-3 00:29:45.705 http://cunit.sourceforge.net/ 00:29:45.705 00:29:45.705 00:29:45.705 Suite: bdevio tests on: Nvme3n1 00:29:45.705 Test: blockdev write read block ...passed 00:29:45.705 Test: blockdev write zeroes read block ...passed 00:29:45.705 Test: blockdev write zeroes read no split ...passed 00:29:45.705 Test: blockdev write zeroes read split ...passed 00:29:45.705 Test: blockdev write zeroes read split partial ...passed 00:29:45.705 Test: blockdev reset ...[2024-11-05 15:55:07.053525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:29:45.705 [2024-11-05 15:55:07.056347] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
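
Each suite below finishes its 'comparev and writev' case with a 'COMPARE FAILURE (02/85)' notice; that is the intended error path, since the test issues an NVMe COMPARE against non-matching data and expects it to fail. The parenthesized pair is the completion's status code type and status code: per the NVMe spec, SCT 02h is Media and Data Integrity Errors and SC 85h is Compare Failure. A small sketch decoding just the statuses that appear in this log (an illustrative subset, not a full decoder):

# Sketch: decode the "(SCT/SC)" pair printed by spdk_nvme_print_completion.
decode_nvme_status() {
    case "$1/$2" in
        00/00) echo 'generic: successful completion' ;;
        00/01) echo 'generic: invalid command opcode' ;;
        02/85) echo 'media error: compare failure' ;;
        *)     echo "unknown status sct=$1 sc=$2" ;;
    esac
}
decode_nvme_status 02 85   # -> media error: compare failure
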
00:29:45.705 passed 00:29:45.705 Test: blockdev write read 8 blocks ...passed 00:29:45.705 Test: blockdev write read size > 128k ...passed 00:29:45.705 Test: blockdev write read invalid size ...passed 00:29:45.705 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:45.705 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:45.705 Test: blockdev write read max offset ...passed 00:29:45.705 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:45.705 Test: blockdev writev readv 8 blocks ...passed 00:29:45.705 Test: blockdev writev readv 30 x 1block ...passed 00:29:45.705 Test: blockdev writev readv block ...passed 00:29:45.705 Test: blockdev writev readv size > 128k ...passed 00:29:45.705 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:45.705 Test: blockdev comparev and writev ...[2024-11-05 15:55:07.062014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bee04000 len:0x1000 00:29:45.705 [2024-11-05 15:55:07.062364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:29:45.705 passed 00:29:45.705 Test: blockdev nvme passthru rw ...passed 00:29:45.705 Test: blockdev nvme passthru vendor specific ...[2024-11-05 15:55:07.063147] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:29:45.705 [2024-11-05 15:55:07.063288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:29:45.705 passed 00:29:45.705 Test: blockdev nvme admin passthru ...passed 00:29:45.705 Test: blockdev copy ...passed 00:29:45.705 Suite: bdevio tests on: Nvme2n3 00:29:45.705 Test: blockdev write read block ...passed 00:29:45.705 Test: blockdev write zeroes read block ...passed 00:29:45.965 Test: blockdev write zeroes read no split ...passed 00:29:45.965 Test: blockdev write zeroes read split ...passed 00:29:45.965 Test: blockdev write zeroes read split partial ...passed 00:29:45.965 Test: blockdev reset ...[2024-11-05 15:55:07.106815] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:29:45.965 [2024-11-05 15:55:07.109747] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:29:45.965 passed 00:29:45.965 Test: blockdev write read 8 blocks ...passed 00:29:45.965 Test: blockdev write read size > 128k ...passed 00:29:45.965 Test: blockdev write read invalid size ...passed 00:29:45.965 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:45.965 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:45.965 Test: blockdev write read max offset ...passed 00:29:45.965 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:45.965 Test: blockdev writev readv 8 blocks ...passed 00:29:45.965 Test: blockdev writev readv 30 x 1block ...passed 00:29:45.965 Test: blockdev writev readv block ...passed 00:29:45.965 Test: blockdev writev readv size > 128k ...passed 00:29:45.965 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:45.965 Test: blockdev comparev and writev ...[2024-11-05 15:55:07.119061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bee02000 len:0x1000 00:29:45.965 [2024-11-05 15:55:07.119239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:29:45.965 passed 00:29:45.965 Test: blockdev nvme passthru rw ...passed 00:29:45.965 Test: blockdev nvme passthru vendor specific ...[2024-11-05 15:55:07.119957] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:29:45.965 [2024-11-05 15:55:07.120053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:29:45.965 passed 00:29:45.965 Test: blockdev nvme admin passthru ...passed 00:29:45.965 Test: blockdev copy ...passed 00:29:45.965 Suite: bdevio tests on: Nvme2n2 00:29:45.965 Test: blockdev write read block ...passed 00:29:45.965 Test: blockdev write zeroes read block ...passed 00:29:45.965 Test: blockdev write zeroes read no split ...passed 00:29:45.965 Test: blockdev write zeroes read split ...passed 00:29:45.965 Test: blockdev write zeroes read split partial ...passed 00:29:45.965 Test: blockdev reset ...[2024-11-05 15:55:07.182966] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:29:45.965 [2024-11-05 15:55:07.186017] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
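
A setup detail worth recalling while these per-namespace suites run: before the GPT work, get_zoned_devs walked every /sys/block/nvme* entry and read its queue/zoned attribute so zoned namespaces could be excluded, and every device in this run reported 'none'. A standalone sketch of that check, assuming the same sysfs layout:

# Sketch: collect zoned NVMe block devices, as get_zoned_devs did above.
declare -A zoned_devs=()
for nvme in /sys/block/nvme*; do
    [[ -e $nvme/queue/zoned ]] || continue
    # "none" marks a conventional device; host-aware/host-managed are zoned.
    if [[ $(<"$nvme/queue/zoned") != none ]]; then
        zoned_devs[${nvme##*/}]=1
    fi
done
echo "zoned devices: ${!zoned_devs[*]}"
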
00:29:45.965 passed 00:29:45.965 Test: blockdev write read 8 blocks ...passed 00:29:45.965 Test: blockdev write read size > 128k ...passed 00:29:45.965 Test: blockdev write read invalid size ...passed 00:29:45.965 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:45.965 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:45.965 Test: blockdev write read max offset ...passed 00:29:45.965 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:45.965 Test: blockdev writev readv 8 blocks ...passed 00:29:45.965 Test: blockdev writev readv 30 x 1block ...passed 00:29:45.965 Test: blockdev writev readv block ...passed 00:29:45.965 Test: blockdev writev readv size > 128k ...passed 00:29:45.965 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:45.965 Test: blockdev comparev and writev ...[2024-11-05 15:55:07.191631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d6638000 len:0x1000 00:29:45.965 [2024-11-05 15:55:07.191753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:29:45.965 passed 00:29:45.965 Test: blockdev nvme passthru rw ...passed 00:29:45.965 Test: blockdev nvme passthru vendor specific ...[2024-11-05 15:55:07.192383] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:29:45.965 passed 00:29:45.965 Test: blockdev nvme admin passthru ...[2024-11-05 15:55:07.192454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:29:45.965 passed 00:29:45.965 Test: blockdev copy ...passed 00:29:45.965 Suite: bdevio tests on: Nvme2n1 00:29:45.965 Test: blockdev write read block ...passed 00:29:45.965 Test: blockdev write zeroes read block ...passed 00:29:45.965 Test: blockdev write zeroes read no split ...passed 00:29:45.965 Test: blockdev write zeroes read split ...passed 00:29:45.965 Test: blockdev write zeroes read split partial ...passed 00:29:45.965 Test: blockdev reset ...[2024-11-05 15:55:07.234705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:29:45.965 [2024-11-05 15:55:07.237668] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:29:45.965 passed 00:29:45.965 Test: blockdev write read 8 blocks ...passed 00:29:45.965 Test: blockdev write read size > 128k ...passed 00:29:45.965 Test: blockdev write read invalid size ...passed 00:29:45.965 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:45.965 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:45.965 Test: blockdev write read max offset ...passed 00:29:45.965 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:45.966 Test: blockdev writev readv 8 blocks ...passed 00:29:45.966 Test: blockdev writev readv 30 x 1block ...passed 00:29:45.966 Test: blockdev writev readv block ...passed 00:29:45.966 Test: blockdev writev readv size > 128k ...passed 00:29:45.966 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:45.966 Test: blockdev comparev and writev ...[2024-11-05 15:55:07.243087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d6634000 len:0x1000 00:29:45.966 [2024-11-05 15:55:07.243197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:29:45.966 passed 00:29:45.966 Test: blockdev nvme passthru rw ...passed 00:29:45.966 Test: blockdev nvme passthru vendor specific ...[2024-11-05 15:55:07.243751] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:29:45.966 [2024-11-05 15:55:07.243829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:29:45.966 passed 00:29:45.966 Test: blockdev nvme admin passthru ...passed 00:29:45.966 Test: blockdev copy ...passed 00:29:45.966 Suite: bdevio tests on: Nvme1n1p2 00:29:45.966 Test: blockdev write read block ...passed 00:29:45.966 Test: blockdev write zeroes read block ...passed 00:29:45.966 Test: blockdev write zeroes read no split ...passed 00:29:45.966 Test: blockdev write zeroes read split ...passed 00:29:45.966 Test: blockdev write zeroes read split partial ...passed 00:29:45.966 Test: blockdev reset ...[2024-11-05 15:55:07.287372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:29:45.966 [2024-11-05 15:55:07.289990] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:29:45.966 passed 00:29:45.966 Test: blockdev write read 8 blocks ...passed 00:29:45.966 Test: blockdev write read size > 128k ...passed 00:29:45.966 Test: blockdev write read invalid size ...passed 00:29:45.966 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:45.966 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:45.966 Test: blockdev write read max offset ...passed 00:29:45.966 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:45.966 Test: blockdev writev readv 8 blocks ...passed 00:29:45.966 Test: blockdev writev readv 30 x 1block ...passed 00:29:45.966 Test: blockdev writev readv block ...passed 00:29:45.966 Test: blockdev writev readv size > 128k ...passed 00:29:45.966 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:45.966 Test: blockdev comparev and writev ...[2024-11-05 15:55:07.295339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2d6630000 len:0x1000 00:29:45.966 [2024-11-05 15:55:07.295438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:29:45.966 passed 00:29:45.966 Test: blockdev nvme passthru rw ...passed 00:29:45.966 Test: blockdev nvme passthru vendor specific ...passed 00:29:45.966 Test: blockdev nvme admin passthru ...passed 00:29:45.966 Test: blockdev copy ...passed 00:29:45.966 Suite: bdevio tests on: Nvme1n1p1 00:29:45.966 Test: blockdev write read block ...passed 00:29:45.966 Test: blockdev write zeroes read block ...passed 00:29:45.966 Test: blockdev write zeroes read no split ...passed 00:29:45.966 Test: blockdev write zeroes read split ...passed 00:29:46.225 Test: blockdev write zeroes read split partial ...passed 00:29:46.225 Test: blockdev reset ...[2024-11-05 15:55:07.338047] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:29:46.225 [2024-11-05 15:55:07.340690] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
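
The Nvme1n1p2 and Nvme1n1p1 suites here exercise GPT partition bdevs rather than raw namespaces; in the bdev dump earlier they carry driver_specific.gpt entries with offset_blocks 655360 and 256 on base bdev Nvme1n1, which is why the comparev commands log lba:655360 for p2 above and lba:256 for p1 below. Against a live target that mapping can be read back over RPC; a hedged sketch (socket path as in this run, jq assumed available as elsewhere in the suite):

# Sketch: show how a GPT partition bdev maps onto its base NVMe bdev.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$RPC" -s /var/tmp/spdk.sock bdev_get_bdevs -b Nvme1n1p1 \
    | jq '.[0].driver_specific.gpt
          | {base_bdev, offset_blocks, partition_name}'
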
00:29:46.225 passed 00:29:46.225 Test: blockdev write read 8 blocks ...passed 00:29:46.225 Test: blockdev write read size > 128k ...passed 00:29:46.225 Test: blockdev write read invalid size ...passed 00:29:46.225 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:46.225 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:46.225 Test: blockdev write read max offset ...passed 00:29:46.225 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:46.225 Test: blockdev writev readv 8 blocks ...passed 00:29:46.225 Test: blockdev writev readv 30 x 1block ...passed 00:29:46.225 Test: blockdev writev readv block ...passed 00:29:46.225 Test: blockdev writev readv size > 128k ...passed 00:29:46.225 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:46.225 Test: blockdev comparev and writev ...[2024-11-05 15:55:07.346000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2be40e000 len:0x1000 00:29:46.225 [2024-11-05 15:55:07.346104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:29:46.225 passed 00:29:46.225 Test: blockdev nvme passthru rw ...passed 00:29:46.225 Test: blockdev nvme passthru vendor specific ...passed 00:29:46.225 Test: blockdev nvme admin passthru ...passed 00:29:46.225 Test: blockdev copy ...passed 00:29:46.225 Suite: bdevio tests on: Nvme0n1 00:29:46.225 Test: blockdev write read block ...passed 00:29:46.225 Test: blockdev write zeroes read block ...passed 00:29:46.225 Test: blockdev write zeroes read no split ...passed 00:29:46.225 Test: blockdev write zeroes read split ...passed 00:29:46.225 Test: blockdev write zeroes read split partial ...passed 00:29:46.225 Test: blockdev reset ...[2024-11-05 15:55:07.387149] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:29:46.225 [2024-11-05 15:55:07.389722] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:29:46.225 passed 00:29:46.226 Test: blockdev write read 8 blocks ...passed 00:29:46.226 Test: blockdev write read size > 128k ...passed 00:29:46.226 Test: blockdev write read invalid size ...passed 00:29:46.226 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:46.226 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:46.226 Test: blockdev write read max offset ...passed 00:29:46.226 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:46.226 Test: blockdev writev readv 8 blocks ...passed 00:29:46.226 Test: blockdev writev readv 30 x 1block ...passed 00:29:46.226 Test: blockdev writev readv block ...passed 00:29:46.226 Test: blockdev writev readv size > 128k ...passed 00:29:46.226 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:46.226 Test: blockdev comparev and writev ...passed 00:29:46.226 Test: blockdev nvme passthru rw ...[2024-11-05 15:55:07.395135] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:29:46.226 separate metadata which is not supported yet. 
00:29:46.226 passed 00:29:46.226 Test: blockdev nvme passthru vendor specific ...[2024-11-05 15:55:07.395531] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:29:46.226 [2024-11-05 15:55:07.395611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:29:46.226 passed 00:29:46.226 Test: blockdev nvme admin passthru ...passed 00:29:46.226 Test: blockdev copy ...passed 00:29:46.226 00:29:46.226 Run Summary: Type Total Ran Passed Failed Inactive 00:29:46.226 suites 7 7 n/a 0 0 00:29:46.226 tests 161 161 161 0 0 00:29:46.226 asserts 1025 1025 1025 0 n/a 00:29:46.226 00:29:46.226 Elapsed time = 1.049 seconds 00:29:46.226 0 00:29:46.226 15:55:07 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61372 00:29:46.226 15:55:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 61372 ']' 00:29:46.226 15:55:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 61372 00:29:46.226 15:55:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:29:46.226 15:55:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:46.226 15:55:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61372 00:29:46.226 15:55:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:46.226 15:55:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:46.226 15:55:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61372' 00:29:46.226 killing process with pid 61372 00:29:46.226 15:55:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@971 -- # kill 61372 00:29:46.226 15:55:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@976 -- # wait 61372 00:29:46.793 15:55:08 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:29:46.793 00:29:46.793 real 0m2.102s 00:29:46.793 user 0m5.414s 00:29:46.793 sys 0m0.263s 00:29:46.793 15:55:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:46.793 15:55:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:29:46.793 ************************************ 00:29:46.793 END TEST bdev_bounds 00:29:46.793 ************************************ 00:29:46.793 15:55:08 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:29:46.793 15:55:08 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:29:46.793 15:55:08 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:46.793 15:55:08 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:29:46.793 ************************************ 00:29:46.793 START TEST bdev_nbd 00:29:46.793 ************************************ 00:29:46.793 15:55:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:29:46.793 15:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:29:46.794 15:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:29:46.794 15:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:46.794 15:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:46.794 15:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:29:46.794 15:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:29:46.794 15:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:29:46.794 15:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:29:46.794 15:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:29:46.794 15:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:29:46.794 15:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:29:46.794 15:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:29:46.794 15:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:29:46.794 15:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:29:46.794 15:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:29:46.794 15:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61426 00:29:46.794 15:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:29:46.794 15:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61426 /var/tmp/spdk-nbd.sock 00:29:46.794 15:55:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 61426 ']' 00:29:46.794 15:55:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:29:46.794 15:55:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:46.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:29:46.794 15:55:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:29:46.794 15:55:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:46.794 15:55:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:29:46.794 15:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:29:47.052 [2024-11-05 15:55:08.206361] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 
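
Everything from here on runs against a dedicated bdev service rather than a full SPDK application: nbd_function_test launches bdev_svc on a private RPC socket, records its pid (61426 in this run), and blocks in waitforlisten until the socket accepts connections. Stripped of the xtrace noise, the harness amounts to:

    # waitforlisten is the autotest_common.sh helper traced above
    spdk=/home/vagrant/spdk_repo/spdk
    $spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 \
        --json $spdk/test/bdev/bdev.json &
    nbd_pid=$!
    waitforlisten "$nbd_pid" /var/tmp/spdk-nbd.sock
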
00:29:47.052 [2024-11-05 15:55:08.206480] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:47.052 [2024-11-05 15:55:08.363934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:47.311 [2024-11-05 15:55:08.467472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:47.877 15:55:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:47.877 15:55:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:29:47.877 15:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:29:47.877 15:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:47.877 15:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:29:47.877 15:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:29:47.877 15:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:29:47.877 15:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:47.877 15:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:29:47.877 15:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:29:47.877 15:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:29:47.877 15:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:29:47.877 15:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:29:47.877 15:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:29:47.877 15:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:29:48.135 15:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:29:48.135 15:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:29:48.135 15:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:29:48.135 15:55:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:29:48.136 15:55:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:29:48.136 15:55:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:29:48.136 15:55:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:29:48.136 15:55:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:29:48.136 15:55:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:29:48.136 15:55:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:29:48.136 15:55:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:29:48.136 15:55:09 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:48.136 1+0 records in 00:29:48.136 1+0 records out 00:29:48.136 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000377386 s, 10.9 MB/s 00:29:48.136 15:55:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:48.136 15:55:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:29:48.136 15:55:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:48.136 15:55:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:29:48.136 15:55:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:29:48.136 15:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:48.136 15:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:29:48.136 15:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:29:48.423 15:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:29:48.423 15:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:29:48.423 15:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:29:48.423 15:55:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:29:48.423 15:55:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:29:48.423 15:55:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:29:48.423 15:55:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:29:48.423 15:55:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:29:48.423 15:55:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:29:48.423 15:55:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:29:48.423 15:55:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:29:48.423 15:55:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:48.423 1+0 records in 00:29:48.423 1+0 records out 00:29:48.423 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000327993 s, 12.5 MB/s 00:29:48.423 15:55:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:48.423 15:55:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:29:48.423 15:55:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:48.423 15:55:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:29:48.423 15:55:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:29:48.423 15:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:48.423 15:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:29:48.423 15:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:29:48.423 15:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:29:48.423 15:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:29:48.423 15:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:29:48.423 15:55:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd2 00:29:48.423 15:55:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:29:48.423 15:55:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:29:48.424 15:55:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:29:48.424 15:55:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd2 /proc/partitions 00:29:48.424 15:55:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:29:48.424 15:55:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:29:48.424 15:55:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:29:48.424 15:55:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:48.424 1+0 records in 00:29:48.424 1+0 records out 00:29:48.424 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000485221 s, 8.4 MB/s 00:29:48.424 15:55:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:48.424 15:55:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:29:48.424 15:55:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:48.424 15:55:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:29:48.424 15:55:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:29:48.424 15:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:48.424 15:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:29:48.424 15:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:29:48.683 15:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:29:48.683 15:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:29:48.683 15:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:29:48.683 15:55:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd3 00:29:48.683 15:55:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:29:48.683 15:55:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:29:48.683 15:55:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:29:48.683 15:55:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd3 /proc/partitions 00:29:48.683 15:55:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:29:48.683 15:55:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:29:48.683 15:55:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:29:48.683 15:55:10 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@887 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:48.683 1+0 records in 00:29:48.683 1+0 records out 00:29:48.683 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00044649 s, 9.2 MB/s 00:29:48.683 15:55:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:48.683 15:55:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:29:48.683 15:55:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:48.683 15:55:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:29:48.683 15:55:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:29:48.683 15:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:48.683 15:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:29:48.683 15:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:29:48.941 15:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:29:48.941 15:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:29:48.941 15:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:29:48.941 15:55:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd4 00:29:48.941 15:55:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:29:48.941 15:55:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:29:48.941 15:55:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:29:48.941 15:55:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd4 /proc/partitions 00:29:48.941 15:55:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:29:48.941 15:55:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:29:48.941 15:55:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:29:48.941 15:55:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:48.941 1+0 records in 00:29:48.941 1+0 records out 00:29:48.941 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000491362 s, 8.3 MB/s 00:29:48.941 15:55:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:48.941 15:55:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:29:48.941 15:55:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:48.941 15:55:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:29:48.941 15:55:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:29:48.941 15:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:48.941 15:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:29:48.941 15:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
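
Each nbd_start_disk above is followed by the same waitfornbd handshake: poll /proc/partitions until the kernel registers the device, then read a single 4 KiB block with O_DIRECT and insist the copy is non-empty, which proves the device services I/O and not merely that the node exists. Reassembled from the trace (the back-off sleep is an assumption; every probe in this run succeeds on the first pass):

    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed retry delay; never exercised in this run
        done
        # confirm the device actually answers a direct-I/O read
        dd if=/dev/$nbd_name of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
            bs=4096 count=1 iflag=direct
        size=$(stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest)
        rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
        [[ $size != 0 ]]
    }
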
00:29:49.198 15:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:29:49.198 15:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:29:49.198 15:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:29:49.198 15:55:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd5 00:29:49.198 15:55:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:29:49.198 15:55:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:29:49.198 15:55:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:29:49.198 15:55:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd5 /proc/partitions 00:29:49.198 15:55:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:29:49.198 15:55:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:29:49.198 15:55:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:29:49.198 15:55:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:49.198 1+0 records in 00:29:49.198 1+0 records out 00:29:49.198 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000370212 s, 11.1 MB/s 00:29:49.199 15:55:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:49.199 15:55:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:29:49.199 15:55:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:49.199 15:55:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:29:49.199 15:55:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:29:49.199 15:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:49.199 15:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:29:49.199 15:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:29:49.456 15:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:29:49.456 15:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:29:49.456 15:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:29:49.456 15:55:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd6 00:29:49.456 15:55:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:29:49.456 15:55:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:29:49.456 15:55:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:29:49.456 15:55:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd6 /proc/partitions 00:29:49.456 15:55:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:29:49.456 15:55:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:29:49.456 15:55:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:29:49.457 15:55:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # 
dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:49.457 1+0 records in 00:29:49.457 1+0 records out 00:29:49.457 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000618221 s, 6.6 MB/s 00:29:49.457 15:55:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:49.457 15:55:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:29:49.457 15:55:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:49.457 15:55:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:29:49.457 15:55:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:29:49.457 15:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:49.457 15:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:29:49.457 15:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:49.715 15:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:29:49.715 { 00:29:49.715 "nbd_device": "/dev/nbd0", 00:29:49.715 "bdev_name": "Nvme0n1" 00:29:49.715 }, 00:29:49.715 { 00:29:49.715 "nbd_device": "/dev/nbd1", 00:29:49.715 "bdev_name": "Nvme1n1p1" 00:29:49.715 }, 00:29:49.715 { 00:29:49.715 "nbd_device": "/dev/nbd2", 00:29:49.715 "bdev_name": "Nvme1n1p2" 00:29:49.715 }, 00:29:49.715 { 00:29:49.715 "nbd_device": "/dev/nbd3", 00:29:49.715 "bdev_name": "Nvme2n1" 00:29:49.715 }, 00:29:49.715 { 00:29:49.715 "nbd_device": "/dev/nbd4", 00:29:49.715 "bdev_name": "Nvme2n2" 00:29:49.715 }, 00:29:49.715 { 00:29:49.715 "nbd_device": "/dev/nbd5", 00:29:49.715 "bdev_name": "Nvme2n3" 00:29:49.715 }, 00:29:49.715 { 00:29:49.715 "nbd_device": "/dev/nbd6", 00:29:49.715 "bdev_name": "Nvme3n1" 00:29:49.715 } 00:29:49.715 ]' 00:29:49.715 15:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:29:49.715 15:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:29:49.715 { 00:29:49.715 "nbd_device": "/dev/nbd0", 00:29:49.715 "bdev_name": "Nvme0n1" 00:29:49.715 }, 00:29:49.715 { 00:29:49.715 "nbd_device": "/dev/nbd1", 00:29:49.715 "bdev_name": "Nvme1n1p1" 00:29:49.715 }, 00:29:49.715 { 00:29:49.715 "nbd_device": "/dev/nbd2", 00:29:49.715 "bdev_name": "Nvme1n1p2" 00:29:49.715 }, 00:29:49.715 { 00:29:49.715 "nbd_device": "/dev/nbd3", 00:29:49.715 "bdev_name": "Nvme2n1" 00:29:49.715 }, 00:29:49.715 { 00:29:49.715 "nbd_device": "/dev/nbd4", 00:29:49.715 "bdev_name": "Nvme2n2" 00:29:49.715 }, 00:29:49.715 { 00:29:49.715 "nbd_device": "/dev/nbd5", 00:29:49.715 "bdev_name": "Nvme2n3" 00:29:49.715 }, 00:29:49.715 { 00:29:49.715 "nbd_device": "/dev/nbd6", 00:29:49.715 "bdev_name": "Nvme3n1" 00:29:49.715 } 00:29:49.715 ]' 00:29:49.715 15:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:29:49.715 15:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:29:49.715 15:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:49.715 15:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:29:49.715 15:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:49.715 15:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:29:49.715 15:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:49.715 15:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:49.974 15:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:49.974 15:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:49.974 15:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:49.974 15:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:49.974 15:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:49.974 15:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:49.974 15:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:49.974 15:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:49.974 15:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:49.975 15:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:29:50.234 15:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:50.234 15:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:50.234 15:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:50.234 15:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:50.234 15:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:50.234 15:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:50.234 15:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:50.234 15:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:50.234 15:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:50.234 15:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:29:50.494 15:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:29:50.494 15:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:29:50.494 15:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:29:50.494 15:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:50.494 15:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:50.494 15:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:29:50.494 15:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:50.494 15:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:50.494 15:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:50.494 15:55:11 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:29:50.494 15:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:29:50.494 15:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:29:50.494 15:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:29:50.494 15:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:50.494 15:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:50.494 15:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:29:50.494 15:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:50.494 15:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:50.494 15:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:50.494 15:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:29:50.753 15:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:29:50.753 15:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:29:50.753 15:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:29:50.753 15:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:50.753 15:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:50.753 15:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:29:50.753 15:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:50.753 15:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:50.753 15:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:50.753 15:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:29:51.012 15:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:29:51.012 15:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:29:51.012 15:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:29:51.012 15:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:51.012 15:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:51.012 15:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:29:51.012 15:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:51.012 15:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:51.012 15:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:51.012 15:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:29:51.270 15:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:29:51.271 15:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:29:51.271 15:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
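
Teardown mirrors setup: each nbd_stop_disk RPC is chased by waitfornbd_exit, which polls until the name drops out of /proc/partitions. A sketch consistent with the nbd_common.sh line numbers in the trace; note it returns 0 even if the device lingers, so cleanup can never fail the test on its own:

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            if grep -q -w "$nbd_name" /proc/partitions; then
                sleep 0.1      # still registered, give the kernel a beat
            else
                break          # gone from /proc/partitions, detach complete
            fi
        done
        return 0
    }
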
00:29:51.271 15:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:51.271 15:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:51.271 15:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:29:51.271 15:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:51.271 15:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:51.271 15:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:51.271 15:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:51.271 15:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:51.529 15:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:29:51.529 15:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:29:51.529 15:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:51.529 15:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:29:51.529 15:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:29:51.529 15:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:51.529 15:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:29:51.529 15:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:29:51.529 15:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:29:51.529 15:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:29:51.529 15:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:29:51.529 15:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:29:51.529 15:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:29:51.529 15:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:51.529 15:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:29:51.529 15:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:29:51.529 15:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:29:51.529 15:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:29:51.529 15:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:29:51.529 15:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:51.529 15:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:29:51.529 15:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:51.529 15:55:12 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:29:51.529 15:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:51.529 15:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:29:51.529 15:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:51.529 15:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:29:51.529 15:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:29:51.788 /dev/nbd0 00:29:51.788 15:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:51.788 15:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:51.788 15:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:29:51.788 15:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:29:51.788 15:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:29:51.788 15:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:29:51.788 15:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:29:51.788 15:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:29:51.788 15:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:29:51.788 15:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:29:51.788 15:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:51.788 1+0 records in 00:29:51.788 1+0 records out 00:29:51.788 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000363264 s, 11.3 MB/s 00:29:51.788 15:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:51.788 15:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:29:51.788 15:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:51.788 15:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:29:51.788 15:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:29:51.788 15:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:51.788 15:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:29:51.788 15:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:29:52.047 /dev/nbd1 00:29:52.047 15:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:52.047 15:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:52.047 15:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:29:52.047 15:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:29:52.047 15:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:29:52.047 15:55:13 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:29:52.047 15:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:29:52.047 15:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:29:52.047 15:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:29:52.047 15:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:29:52.047 15:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:52.047 1+0 records in 00:29:52.047 1+0 records out 00:29:52.047 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000589731 s, 6.9 MB/s 00:29:52.047 15:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:52.047 15:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:29:52.047 15:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:52.047 15:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:29:52.047 15:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:29:52.047 15:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:52.047 15:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:29:52.047 15:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:29:52.305 /dev/nbd10 00:29:52.305 15:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:29:52.305 15:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:29:52.305 15:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd10 00:29:52.305 15:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:29:52.305 15:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:29:52.306 15:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:29:52.306 15:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd10 /proc/partitions 00:29:52.306 15:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:29:52.306 15:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:29:52.306 15:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:29:52.306 15:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:52.306 1+0 records in 00:29:52.306 1+0 records out 00:29:52.306 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000410874 s, 10.0 MB/s 00:29:52.306 15:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:52.306 15:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:29:52.306 15:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:52.306 15:55:13 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:29:52.306 15:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:29:52.306 15:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:52.306 15:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:29:52.306 15:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:29:52.564 /dev/nbd11 00:29:52.564 15:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:29:52.564 15:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:29:52.564 15:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd11 00:29:52.564 15:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:29:52.564 15:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:29:52.564 15:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:29:52.564 15:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd11 /proc/partitions 00:29:52.564 15:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:29:52.564 15:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:29:52.564 15:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:29:52.564 15:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:52.564 1+0 records in 00:29:52.564 1+0 records out 00:29:52.564 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000562952 s, 7.3 MB/s 00:29:52.564 15:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:52.564 15:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:29:52.564 15:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:52.564 15:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:29:52.564 15:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:29:52.564 15:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:52.564 15:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:29:52.564 15:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:29:52.564 /dev/nbd12 00:29:52.822 15:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:29:52.822 15:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:29:52.822 15:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd12 00:29:52.822 15:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:29:52.822 15:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:29:52.822 15:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:29:52.822 15:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd12 /proc/partitions 
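
This second attach pass pins explicit device nodes (/dev/nbd0, /dev/nbd1, /dev/nbd10 through /dev/nbd14) instead of letting the target auto-assign, and it was gated on the nbd_get_count check a few records back reporting zero exports. The counting idiom, lifted from the trace, tolerates an empty disk list because grep -c exits non-zero when nothing matches:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    count=$($rpc -s /var/tmp/spdk-nbd.sock nbd_get_disks \
        | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [[ $count -eq 0 ]]   # all prior exports must be gone before re-attaching
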
00:29:52.822 15:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:29:52.822 15:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:29:52.822 15:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:29:52.822 15:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:52.822 1+0 records in 00:29:52.822 1+0 records out 00:29:52.822 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000333146 s, 12.3 MB/s 00:29:52.822 15:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:52.822 15:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:29:52.822 15:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:52.822 15:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:29:52.822 15:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:29:52.822 15:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:52.822 15:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:29:52.822 15:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:29:52.822 /dev/nbd13 00:29:52.822 15:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:29:52.822 15:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:29:52.822 15:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd13 00:29:52.822 15:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:29:52.822 15:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:29:52.822 15:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:29:52.822 15:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd13 /proc/partitions 00:29:52.822 15:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:29:52.822 15:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:29:52.822 15:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:29:52.822 15:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:52.822 1+0 records in 00:29:52.822 1+0 records out 00:29:52.822 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000537047 s, 7.6 MB/s 00:29:52.822 15:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:53.080 15:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:29:53.081 15:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:53.081 15:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:29:53.081 15:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:29:53.081 15:55:14 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:53.081 15:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:29:53.081 15:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:29:53.081 /dev/nbd14 00:29:53.339 15:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:29:53.339 15:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:29:53.339 15:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd14 00:29:53.339 15:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:29:53.339 15:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:29:53.339 15:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:29:53.339 15:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd14 /proc/partitions 00:29:53.339 15:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:29:53.339 15:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:29:53.339 15:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:29:53.339 15:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:53.339 1+0 records in 00:29:53.339 1+0 records out 00:29:53.339 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000646333 s, 6.3 MB/s 00:29:53.339 15:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:53.339 15:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:29:53.339 15:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:53.339 15:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:29:53.339 15:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:29:53.339 15:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:53.339 15:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:29:53.339 15:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:53.339 15:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:53.339 15:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:53.339 15:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:29:53.339 { 00:29:53.339 "nbd_device": "/dev/nbd0", 00:29:53.339 "bdev_name": "Nvme0n1" 00:29:53.339 }, 00:29:53.339 { 00:29:53.339 "nbd_device": "/dev/nbd1", 00:29:53.339 "bdev_name": "Nvme1n1p1" 00:29:53.339 }, 00:29:53.339 { 00:29:53.339 "nbd_device": "/dev/nbd10", 00:29:53.339 "bdev_name": "Nvme1n1p2" 00:29:53.339 }, 00:29:53.339 { 00:29:53.339 "nbd_device": "/dev/nbd11", 00:29:53.339 "bdev_name": "Nvme2n1" 00:29:53.339 }, 00:29:53.339 { 00:29:53.339 "nbd_device": "/dev/nbd12", 00:29:53.339 "bdev_name": "Nvme2n2" 00:29:53.339 }, 00:29:53.339 { 00:29:53.339 "nbd_device": "/dev/nbd13", 00:29:53.339 "bdev_name": "Nvme2n3" 
00:29:53.339 }, 00:29:53.339 { 00:29:53.339 "nbd_device": "/dev/nbd14", 00:29:53.339 "bdev_name": "Nvme3n1" 00:29:53.339 } 00:29:53.339 ]' 00:29:53.339 15:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:29:53.339 { 00:29:53.339 "nbd_device": "/dev/nbd0", 00:29:53.339 "bdev_name": "Nvme0n1" 00:29:53.339 }, 00:29:53.339 { 00:29:53.339 "nbd_device": "/dev/nbd1", 00:29:53.339 "bdev_name": "Nvme1n1p1" 00:29:53.339 }, 00:29:53.339 { 00:29:53.339 "nbd_device": "/dev/nbd10", 00:29:53.339 "bdev_name": "Nvme1n1p2" 00:29:53.339 }, 00:29:53.339 { 00:29:53.339 "nbd_device": "/dev/nbd11", 00:29:53.339 "bdev_name": "Nvme2n1" 00:29:53.339 }, 00:29:53.339 { 00:29:53.339 "nbd_device": "/dev/nbd12", 00:29:53.339 "bdev_name": "Nvme2n2" 00:29:53.339 }, 00:29:53.339 { 00:29:53.339 "nbd_device": "/dev/nbd13", 00:29:53.339 "bdev_name": "Nvme2n3" 00:29:53.339 }, 00:29:53.339 { 00:29:53.339 "nbd_device": "/dev/nbd14", 00:29:53.339 "bdev_name": "Nvme3n1" 00:29:53.339 } 00:29:53.339 ]' 00:29:53.339 15:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:53.634 15:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:29:53.634 /dev/nbd1 00:29:53.634 /dev/nbd10 00:29:53.634 /dev/nbd11 00:29:53.634 /dev/nbd12 00:29:53.634 /dev/nbd13 00:29:53.634 /dev/nbd14' 00:29:53.634 15:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:29:53.634 /dev/nbd1 00:29:53.634 /dev/nbd10 00:29:53.634 /dev/nbd11 00:29:53.634 /dev/nbd12 00:29:53.634 /dev/nbd13 00:29:53.634 /dev/nbd14' 00:29:53.634 15:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:53.634 15:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:29:53.634 15:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:29:53.634 15:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:29:53.634 15:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:29:53.634 15:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:29:53.634 15:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:29:53.634 15:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:29:53.634 15:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:29:53.634 15:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:29:53.634 15:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:29:53.634 15:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:29:53.634 256+0 records in 00:29:53.634 256+0 records out 00:29:53.634 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00819188 s, 128 MB/s 00:29:53.634 15:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:53.634 15:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:29:53.634 256+0 records in 00:29:53.634 256+0 records out 00:29:53.634 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0660878 s, 15.9 MB/s 00:29:53.634 15:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:53.634 15:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:29:53.634 256+0 records in 00:29:53.634 256+0 records out 00:29:53.634 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0685301 s, 15.3 MB/s 00:29:53.634 15:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:53.634 15:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:29:53.634 256+0 records in 00:29:53.634 256+0 records out 00:29:53.634 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0632647 s, 16.6 MB/s 00:29:53.634 15:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:53.634 15:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:29:53.634 256+0 records in 00:29:53.634 256+0 records out 00:29:53.634 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0611467 s, 17.1 MB/s 00:29:53.634 15:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:53.634 15:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:29:53.895 256+0 records in 00:29:53.895 256+0 records out 00:29:53.895 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0697318 s, 15.0 MB/s 00:29:53.895 15:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:53.895 15:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:29:53.895 256+0 records in 00:29:53.895 256+0 records out 00:29:53.895 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0594215 s, 17.6 MB/s 00:29:53.895 15:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:53.895 15:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:29:53.895 256+0 records in 00:29:53.895 256+0 records out 00:29:53.895 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0647395 s, 16.2 MB/s 00:29:53.895 15:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:29:53.895 15:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:29:53.895 15:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:29:53.895 15:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:29:53.895 15:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:29:53.895 15:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:29:53.895 15:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:29:53.895 15:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i 
in "${nbd_list[@]}" 00:29:53.895 15:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:29:53.895 15:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:53.895 15:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:29:53.895 15:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:53.895 15:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:29:53.895 15:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:53.895 15:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:29:53.895 15:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:53.895 15:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:29:53.896 15:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:53.896 15:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:29:53.896 15:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:53.896 15:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:29:53.896 15:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:29:53.896 15:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:29:53.896 15:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:53.896 15:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:29:53.896 15:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:53.896 15:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:29:53.896 15:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:53.896 15:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:54.154 15:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:54.154 15:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:54.154 15:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:54.154 15:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:54.154 15:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:54.154 15:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:54.154 15:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:54.154 15:55:15 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:29:54.154 15:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:54.154 15:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:29:54.415 15:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:54.415 15:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:54.415 15:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:54.415 15:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:54.415 15:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:54.415 15:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:54.415 15:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:54.415 15:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:54.415 15:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:54.415 15:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:29:54.674 15:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:29:54.674 15:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:29:54.674 15:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:29:54.674 15:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:54.674 15:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:54.674 15:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:29:54.674 15:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:54.674 15:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:54.674 15:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:54.674 15:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:29:54.951 15:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:29:54.951 15:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:29:54.951 15:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:29:54.951 15:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:54.951 15:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:54.951 15:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:29:54.951 15:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:54.951 15:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:54.951 15:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:54.951 15:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:29:55.234 15:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:29:55.234 15:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:29:55.234 15:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:29:55.234 15:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:55.234 15:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:55.234 15:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:29:55.234 15:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:55.234 15:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:55.234 15:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:55.234 15:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:29:55.234 15:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:29:55.234 15:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:29:55.234 15:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:29:55.234 15:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:55.234 15:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:55.234 15:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:29:55.234 15:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:55.234 15:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:55.234 15:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:55.234 15:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:29:55.492 15:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:29:55.492 15:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:29:55.492 15:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:29:55.492 15:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:55.492 15:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:55.492 15:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:29:55.492 15:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:55.492 15:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:55.492 15:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:55.492 15:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:55.492 15:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:55.867 15:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:29:55.867 15:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:55.867 15:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:29:55.867 15:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:29:55.867 15:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:29:55.867 15:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:55.867 15:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:29:55.867 15:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:29:55.867 15:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:29:55.867 15:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:29:55.867 15:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:29:55.867 15:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:29:55.867 15:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:55.867 15:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:55.867 15:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:29:55.867 15:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:29:56.126 malloc_lvol_verify 00:29:56.126 15:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:29:56.126 02bdd661-d62c-4219-ad3f-4db119995178 00:29:56.387 15:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:29:56.387 79b5814c-5380-4c64-8d9d-f4d135a975ad 00:29:56.387 15:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:29:56.645 /dev/nbd0 00:29:56.645 15:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:29:56.645 15:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:29:56.645 15:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:29:56.645 15:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:29:56.645 15:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:29:56.645 mke2fs 1.47.0 (5-Feb-2023) 00:29:56.645 Discarding device blocks: 0/4096 done 00:29:56.645 Creating filesystem with 4096 1k blocks and 1024 inodes 00:29:56.645 00:29:56.645 Allocating group tables: 0/1 done 00:29:56.645 Writing inode tables: 0/1 done 00:29:56.645 Creating journal (1024 blocks): done 00:29:56.645 Writing superblocks and filesystem accounting information: 0/1 done 00:29:56.645 00:29:56.645 15:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:56.645 15:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:56.645 15:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:56.645 15:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:56.645 15:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:29:56.645 15:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:29:56.645 15:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:56.903 15:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:56.903 15:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:56.903 15:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:56.903 15:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:56.903 15:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:56.903 15:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:56.903 15:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:56.903 15:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:56.903 15:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61426 00:29:56.903 15:55:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 61426 ']' 00:29:56.903 15:55:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 61426 00:29:56.903 15:55:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:29:56.903 15:55:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:56.903 15:55:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61426 00:29:56.903 killing process with pid 61426 00:29:56.903 15:55:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:56.903 15:55:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:56.903 15:55:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61426' 00:29:56.903 15:55:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@971 -- # kill 61426 00:29:56.903 15:55:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@976 -- # wait 61426 00:29:57.837 ************************************ 00:29:57.837 END TEST bdev_nbd 00:29:57.837 ************************************ 00:29:57.837 15:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:29:57.837 00:29:57.837 real 0m10.727s 00:29:57.837 user 0m15.488s 00:29:57.837 sys 0m3.596s 00:29:57.837 15:55:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:57.837 15:55:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:29:57.837 15:55:18 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:29:57.837 15:55:18 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:29:57.838 15:55:18 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:29:57.838 skipping fio tests on NVMe due to multi-ns failures. 00:29:57.838 15:55:18 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:29:57.838 15:55:18 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:57.838 15:55:18 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:29:57.838 15:55:18 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:29:57.838 15:55:18 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:57.838 15:55:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:29:57.838 ************************************ 00:29:57.838 START TEST bdev_verify 00:29:57.838 ************************************ 00:29:57.838 15:55:18 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:29:57.838 [2024-11-05 15:55:18.968914] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 00:29:57.838 [2024-11-05 15:55:18.969041] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61836 ] 00:29:57.838 [2024-11-05 15:55:19.124532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:58.096 [2024-11-05 15:55:19.214908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:58.096 [2024-11-05 15:55:19.215148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:58.399 Running I/O for 5 seconds... 
00:30:00.710 21760.00 IOPS, 85.00 MiB/s
[2024-11-05T15:55:23.006Z] 22880.00 IOPS, 89.38 MiB/s
[2024-11-05T15:55:24.380Z] 22698.67 IOPS, 88.67 MiB/s
[2024-11-05T15:55:24.947Z] 22496.00 IOPS, 87.88 MiB/s
[2024-11-05T15:55:24.947Z] 22233.60 IOPS, 86.85 MiB/s
00:30:03.585 Latency(us)
00:30:03.585 [2024-11-05T15:55:24.947Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:03.585 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:03.585 Verification LBA range: start 0x0 length 0xbd0bd
00:30:03.585 Nvme0n1 : 5.07 1541.16 6.02 0.00 0.00 82875.48 15123.69 82272.89
00:30:03.585 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:30:03.585 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:30:03.585 Nvme0n1 : 5.06 1592.22 6.22 0.00 0.00 80161.05 14922.04 82676.18
00:30:03.585 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:03.585 Verification LBA range: start 0x0 length 0x4ff80
00:30:03.585 Nvme1n1p1 : 5.07 1539.32 6.01 0.00 0.00 82819.62 16232.76 77030.01
00:30:03.585 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:30:03.585 Verification LBA range: start 0x4ff80 length 0x4ff80
00:30:03.585 Nvme1n1p1 : 5.07 1591.70 6.22 0.00 0.00 79849.47 16031.11 71383.83
00:30:03.585 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:03.585 Verification LBA range: start 0x0 length 0x4ff7f
00:30:03.585 Nvme1n1p2 : 5.08 1538.51 6.01 0.00 0.00 82788.45 19459.15 75416.81
00:30:03.585 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:30:03.585 Verification LBA range: start 0x4ff7f length 0x4ff7f
00:30:03.585 Nvme1n1p2 : 5.07 1590.40 6.21 0.00 0.00 79708.76 16434.41 68964.04
00:30:03.585 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:03.585 Verification LBA range: start 0x0 length 0x80000
00:30:03.585 Nvme2n1 : 5.08 1538.04 6.01 0.00 0.00 82688.26 21878.94 71787.13
00:30:03.585 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:30:03.585 Verification LBA range: start 0x80000 length 0x80000
00:30:03.585 Nvme2n1 : 5.07 1589.95 6.21 0.00 0.00 79595.65 18249.26 65737.65
00:30:03.585 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:03.585 Verification LBA range: start 0x0 length 0x80000
00:30:03.585 Nvme2n2 : 5.08 1537.59 6.01 0.00 0.00 82575.06 19862.45 73803.62
00:30:03.585 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:30:03.585 Verification LBA range: start 0x80000 length 0x80000
00:30:03.585 Nvme2n2 : 5.09 1598.37 6.24 0.00 0.00 79070.21 3112.96 69367.34
00:30:03.585 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:03.585 Verification LBA range: start 0x0 length 0x80000
00:30:03.585 Nvme2n3 : 5.08 1537.12 6.00 0.00 0.00 82463.72 17845.96 76223.41
00:30:03.585 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:30:03.585 Verification LBA range: start 0x80000 length 0x80000
00:30:03.585 Nvme2n3 : 5.10 1607.56 6.28 0.00 0.00 78559.70 8318.03 72190.42
00:30:03.585 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:03.585 Verification LBA range: start 0x0 length 0x20000
00:30:03.585 Nvme3n1 : 5.08 1536.64 6.00 0.00 0.00 82241.43 15829.46 78643.20
00:30:03.585 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:30:03.585 Verification LBA range: start 0x20000 length 0x20000
00:30:03.585 Nvme3n1 : 5.10 1607.14 6.28 0.00 0.00 78506.26 8267.62 74206.92
[2024-11-05T15:55:24.948Z] ===================================================================================================================
00:30:03.586 [2024-11-05T15:55:24.948Z] Total : 21945.73 85.73 0.00 0.00 80959.90 3112.96 82676.18
00:30:04.958
00:30:04.958 real 0m7.215s
00:30:04.958 user 0m13.533s
00:30:04.958 sys 0m0.211s
00:30:04.958 15:55:26 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1128 -- # xtrace_disable
00:30:04.958 15:55:26 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:30:04.958 ************************************
00:30:04.958 END TEST bdev_verify
00:30:04.958 ************************************
00:30:04.958 15:55:26 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:30:04.959 15:55:26 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']'
00:30:04.959 15:55:26 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable
00:30:04.959 15:55:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:30:04.959 ************************************
00:30:04.959 START TEST bdev_verify_big_io
00:30:04.959 ************************************
00:30:04.959 15:55:26 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:30:04.959 [2024-11-05 15:55:26.219120] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization...
00:30:04.959 [2024-11-05 15:55:26.219223] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61934 ]
00:30:05.216 [2024-11-05 15:55:26.370156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:30:05.216 [2024-11-05 15:55:26.462965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:30:05.216 [2024-11-05 15:55:26.463114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:30:05.780 Running I/O for 5 seconds...
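The MiB/s column in these result tables is simply IOPS times the I/O size. A quick sanity check against the totals reported in this log (85.73 for the 4 KiB verify run above, 101.67 for the 64 KiB big-I/O run below, and 267.65 for the later write_zeroes run):

    awk 'BEGIN {
        printf "verify       %.2f MiB/s\n", 21945.73 * 4096  / 1048576   # -> 85.73
        printf "big-I/O      %.2f MiB/s\n", 1626.75  * 65536 / 1048576   # -> 101.67
        printf "write_zeroes %.2f MiB/s\n", 68517.56 * 4096  / 1048576   # -> 267.65
    }'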
00:30:11.858 1670.00 IOPS, 104.38 MiB/s
[2024-11-05T15:55:33.477Z] 3414.50 IOPS, 213.41 MiB/s
[2024-11-05T15:55:33.477Z] 3074.33 IOPS, 192.15 MiB/s
00:30:12.115 Latency(us)
00:30:12.115 [2024-11-05T15:55:33.477Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:12.115 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:30:12.115 Verification LBA range: start 0x0 length 0xbd0b
00:30:12.115 Nvme0n1 : 5.97 90.29 5.64 0.00 0.00 1334867.87 13308.85 1509949.44
00:30:12.115 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:30:12.115 Verification LBA range: start 0xbd0b length 0xbd0b
00:30:12.116 Nvme0n1 : 5.66 101.75 6.36 0.00 0.00 1191656.72 17039.36 1555118.87
00:30:12.116 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:30:12.116 Verification LBA range: start 0x0 length 0x4ff8
00:30:12.116 Nvme1n1p1 : 5.86 91.15 5.70 0.00 0.00 1278301.30 112116.97 1284102.30
00:30:12.116 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:30:12.116 Verification LBA range: start 0x4ff8 length 0x4ff8
00:30:12.116 Nvme1n1p1 : 5.91 109.58 6.85 0.00 0.00 1081978.20 98001.53 1038896.84
00:30:12.116 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:30:12.116 Verification LBA range: start 0x0 length 0x4ff7
00:30:12.116 Nvme1n1p2 : 5.97 95.78 5.99 0.00 0.00 1173415.21 110503.78 1484138.34
00:30:12.116 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:30:12.116 Verification LBA range: start 0x4ff7 length 0x4ff7
00:30:12.116 Nvme1n1p2 : 5.84 113.61 7.10 0.00 0.00 1015532.25 110503.78 955010.76
00:30:12.116 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:30:12.116 Verification LBA range: start 0x0 length 0x8000
00:30:12.116 Nvme2n1 : 6.11 104.22 6.51 0.00 0.00 1049397.79 27827.59 1122782.92
00:30:12.116 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:30:12.116 Verification LBA range: start 0x8000 length 0x8000
00:30:12.116 Nvme2n1 : 5.98 124.84 7.80 0.00 0.00 919918.80 32263.88 1019538.51
00:30:12.116 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:30:12.116 Verification LBA range: start 0x0 length 0x8000
00:30:12.116 Nvme2n2 : 6.14 94.16 5.89 0.00 0.00 1115020.64 28029.24 2387526.89
00:30:12.116 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:30:12.116 Verification LBA range: start 0x8000 length 0x8000
00:30:12.116 Nvme2n2 : 5.99 124.31 7.77 0.00 0.00 895789.31 32263.88 1032444.06
00:30:12.116 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:30:12.116 Verification LBA range: start 0x0 length 0x8000
00:30:12.116 Nvme2n3 : 6.19 120.75 7.55 0.00 0.00 852151.52 16736.89 2400432.44
00:30:12.116 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:30:12.116 Verification LBA range: start 0x8000 length 0x8000
00:30:12.116 Nvme2n3 : 5.99 128.27 8.02 0.00 0.00 846708.05 34280.37 1051802.39
00:30:12.116 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:30:12.116 Verification LBA range: start 0x0 length 0x2000
00:30:12.116 Nvme3n1 : 6.31 184.33 11.52 0.00 0.00 539443.04 645.91 2477865.75
00:30:12.116 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:30:12.116 Verification LBA range: start 0x2000 length 0x2000
00:30:12.116 Nvme3n1 : 6.05 143.70 8.98 0.00 0.00 736334.64 1279.21 1064707.94
00:30:12.116 [2024-11-05T15:55:33.478Z] ===================================================================================================================
00:30:12.116 [2024-11-05T15:55:33.478Z] Total : 1626.75 101.67 0.00 0.00 955963.69 645.91 2477865.75
00:30:14.643
00:30:14.643 real 0m9.538s
00:30:14.643 user 0m17.140s
00:30:14.643 sys 0m0.232s
00:30:14.643 15:55:35 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable
00:30:14.643 15:55:35 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:30:14.643 ************************************
00:30:14.643 END TEST bdev_verify_big_io
00:30:14.643 ************************************
00:30:14.643 15:55:35 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:30:14.643 15:55:35 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']'
00:30:14.643 15:55:35 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable
00:30:14.643 15:55:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:30:14.643 ************************************
00:30:14.643 START TEST bdev_write_zeroes
00:30:14.643 ************************************
00:30:14.643 15:55:35 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:30:14.643 [2024-11-05 15:55:35.787236] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization...
00:30:14.643 [2024-11-05 15:55:35.787433] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62044 ]
00:30:14.643 [2024-11-05 15:55:35.942168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:14.901 [2024-11-05 15:55:36.043534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:30:15.465 Running I/O for 1 seconds...
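Every START TEST / END TEST banner pair in this log, together with the real/user/sys triplet, comes from the run_test wrapper in common/autotest_common.sh. A rough sketch of its observable behaviour (banner text copied from the log; everything else is reconstructed, and the real helper also toggles xtrace and records suite results):

    banner='************************************'
    run_test() {
        local name=$1; shift
        echo "$banner"; echo "START TEST $name"; echo "$banner"
        time "$@"                     # produces the real/user/sys lines seen above
        local rc=$?
        echo "$banner"; echo "END TEST $name"; echo "$banner"
        return $rc
    }

    run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''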
00:30:16.396 68992.00 IOPS, 269.50 MiB/s
00:30:16.396 Latency(us)
00:30:16.396 [2024-11-05T15:55:37.758Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:16.396 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:30:16.396 Nvme0n1 : 1.02 9823.10 38.37 0.00 0.00 13000.68 6049.48 24500.38
00:30:16.396 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:30:16.396 Nvme1n1p1 : 1.02 9810.96 38.32 0.00 0.00 12997.78 9527.93 24500.38
00:30:16.396 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:30:16.396 Nvme1n1p2 : 1.03 9798.76 38.28 0.00 0.00 12984.07 9275.86 23895.43
00:30:16.396 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:30:16.396 Nvme2n1 : 1.03 9787.72 38.23 0.00 0.00 12950.73 9527.93 23290.49
00:30:16.396 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:30:16.396 Nvme2n2 : 1.03 9776.68 38.19 0.00 0.00 12948.28 9527.93 22786.36
00:30:16.397 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:30:16.397 Nvme2n3 : 1.03 9765.65 38.15 0.00 0.00 12933.71 9376.69 22887.19
00:30:16.397 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:30:16.397 Nvme3n1 : 1.03 9754.70 38.10 0.00 0.00 12927.72 9679.16 24802.86
00:30:16.397 [2024-11-05T15:55:37.759Z] ===================================================================================================================
00:30:16.397 [2024-11-05T15:55:37.759Z] Total : 68517.56 267.65 0.00 0.00 12963.28 6049.48 24802.86
00:30:17.330
00:30:17.330 real 0m2.650s
00:30:17.330 user 0m2.352s
00:30:17.330 sys 0m0.185s
00:30:17.330 15:55:38 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable
00:30:17.330 15:55:38 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:30:17.330 ************************************
00:30:17.330 END TEST bdev_write_zeroes
00:30:17.330 ************************************
00:30:17.330 15:55:38 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:30:17.330 15:55:38 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']'
00:30:17.330 15:55:38 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable
00:30:17.330 15:55:38 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:30:17.330 ************************************
00:30:17.330 START TEST bdev_json_nonenclosed
00:30:17.330 ************************************
00:30:17.330 15:55:38 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:30:17.330 [2024-11-05 15:55:38.490767] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization...
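nonenclosed.json itself is not reproduced in the log, but the json_config error reported just below ("not enclosed in {}") pins down its shape: syntactically valid JSON whose top level is not an object. A hypothetical stand-in (file contents assumed, not SPDK's actual fixture):

    cat > nonenclosed.json <<'EOF'
    [
      { "subsystems": [] }
    ]
    EOF
    # valid JSON, but the top level is an array rather than a {} object,
    # so json_config_prepare_ctx rejects it before configuring anything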
00:30:17.330 [2024-11-05 15:55:38.490884] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62097 ] 00:30:17.330 [2024-11-05 15:55:38.642118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:17.587 [2024-11-05 15:55:38.740565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:17.587 [2024-11-05 15:55:38.740637] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:30:17.587 [2024-11-05 15:55:38.740654] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:30:17.587 [2024-11-05 15:55:38.740664] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:17.587 00:30:17.587 real 0m0.495s 00:30:17.587 user 0m0.299s 00:30:17.587 sys 0m0.092s 00:30:17.587 15:55:38 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:17.587 15:55:38 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:30:17.587 ************************************ 00:30:17.587 END TEST bdev_json_nonenclosed 00:30:17.588 ************************************ 00:30:17.846 15:55:38 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:17.846 15:55:38 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:30:17.846 15:55:38 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:17.846 15:55:38 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:30:17.846 ************************************ 00:30:17.846 START TEST bdev_json_nonarray 00:30:17.846 ************************************ 00:30:17.846 15:55:38 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:17.846 [2024-11-05 15:55:39.029828] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 00:30:17.846 [2024-11-05 15:55:39.029939] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62117 ] 00:30:17.846 [2024-11-05 15:55:39.186960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:18.104 [2024-11-05 15:55:39.288623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:18.104 [2024-11-05 15:55:39.288711] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
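The companion nonarray.json fixture is pinned down the same way by the error just above: here the top level is an object, but "subsystems" is not an array. Again a hypothetical stand-in rather than the actual test file:

    cat > nonarray.json <<'EOF'
    {
      "subsystems": {}
    }
    EOF
    # "'subsystems' should be an array": the loader expects a list of
    # subsystem objects, not a single object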
00:30:18.104 [2024-11-05 15:55:39.288729] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:30:18.104 [2024-11-05 15:55:39.288781] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:18.372 00:30:18.372 real 0m0.507s 00:30:18.372 user 0m0.320s 00:30:18.372 sys 0m0.082s 00:30:18.372 15:55:39 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:18.372 ************************************ 00:30:18.372 END TEST bdev_json_nonarray 00:30:18.372 ************************************ 00:30:18.372 15:55:39 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:30:18.372 15:55:39 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:30:18.372 15:55:39 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:30:18.372 15:55:39 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:30:18.372 15:55:39 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:30:18.372 15:55:39 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:18.372 15:55:39 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:30:18.372 ************************************ 00:30:18.372 START TEST bdev_gpt_uuid 00:30:18.372 ************************************ 00:30:18.372 15:55:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1127 -- # bdev_gpt_uuid 00:30:18.372 15:55:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:30:18.372 15:55:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:30:18.372 15:55:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62148 00:30:18.372 15:55:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:30:18.372 15:55:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 62148 00:30:18.372 15:55:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@833 -- # '[' -z 62148 ']' 00:30:18.372 15:55:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:18.372 15:55:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:18.372 15:55:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:18.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:18.373 15:55:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:18.373 15:55:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:30:18.373 15:55:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:30:18.373 [2024-11-05 15:55:39.590811] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 
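The gpt_uuid test drives a standalone spdk_tgt rather than bdevperf; the startup handshake traced above reduces to roughly the following (the poll body is an assumption, since the trace shows the setup and the retry counter but not the per-iteration RPC call):

    # killprocess is the cleanup helper from common/autotest_common.sh,
    # traced earlier in this log
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' &
    spdk_tgt_pid=$!                                   # 62148 in this run
    trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT

    rpc_addr=/var/tmp/spdk.sock
    max_retries=100
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do           # waitforlisten
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods \
            &> /dev/null && break
        sleep 0.5                                     # assumed poll interval
    done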
00:30:18.373 [2024-11-05 15:55:39.590928] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62148 ] 00:30:18.636 [2024-11-05 15:55:39.749173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:18.636 [2024-11-05 15:55:39.846155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:19.202 15:55:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:19.202 15:55:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@866 -- # return 0 00:30:19.202 15:55:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:30:19.202 15:55:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.202 15:55:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:30:19.468 Some configs were skipped because the RPC state that can call them passed over. 00:30:19.468 15:55:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.468 15:55:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:30:19.468 15:55:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.468 15:55:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:30:19.468 15:55:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.468 15:55:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:30:19.468 15:55:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.468 15:55:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:30:19.468 15:55:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.468 15:55:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:30:19.468 { 00:30:19.468 "name": "Nvme1n1p1", 00:30:19.468 "aliases": [ 00:30:19.468 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:30:19.468 ], 00:30:19.468 "product_name": "GPT Disk", 00:30:19.468 "block_size": 4096, 00:30:19.468 "num_blocks": 655104, 00:30:19.468 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:30:19.468 "assigned_rate_limits": { 00:30:19.468 "rw_ios_per_sec": 0, 00:30:19.468 "rw_mbytes_per_sec": 0, 00:30:19.468 "r_mbytes_per_sec": 0, 00:30:19.468 "w_mbytes_per_sec": 0 00:30:19.468 }, 00:30:19.468 "claimed": false, 00:30:19.468 "zoned": false, 00:30:19.468 "supported_io_types": { 00:30:19.468 "read": true, 00:30:19.468 "write": true, 00:30:19.468 "unmap": true, 00:30:19.468 "flush": true, 00:30:19.468 "reset": true, 00:30:19.468 "nvme_admin": false, 00:30:19.468 "nvme_io": false, 00:30:19.468 "nvme_io_md": false, 00:30:19.468 "write_zeroes": true, 00:30:19.468 "zcopy": false, 00:30:19.468 "get_zone_info": false, 00:30:19.468 "zone_management": false, 00:30:19.468 "zone_append": false, 00:30:19.468 "compare": true, 00:30:19.468 "compare_and_write": false, 00:30:19.468 "abort": true, 00:30:19.468 "seek_hole": false, 00:30:19.468 "seek_data": false, 00:30:19.468 "copy": true, 00:30:19.468 "nvme_iov_md": false 00:30:19.468 }, 00:30:19.468 "driver_specific": { 
00:30:19.468 "gpt": { 00:30:19.468 "base_bdev": "Nvme1n1", 00:30:19.468 "offset_blocks": 256, 00:30:19.468 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:30:19.468 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:30:19.468 "partition_name": "SPDK_TEST_first" 00:30:19.468 } 00:30:19.468 } 00:30:19.468 } 00:30:19.468 ]' 00:30:19.468 15:55:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:30:19.468 15:55:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:30:19.468 15:55:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:30:19.776 15:55:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:30:19.776 15:55:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:30:19.776 15:55:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:30:19.776 15:55:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:30:19.776 15:55:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.776 15:55:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:30:19.776 15:55:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.776 15:55:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:30:19.776 { 00:30:19.776 "name": "Nvme1n1p2", 00:30:19.776 "aliases": [ 00:30:19.776 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:30:19.776 ], 00:30:19.776 "product_name": "GPT Disk", 00:30:19.776 "block_size": 4096, 00:30:19.776 "num_blocks": 655103, 00:30:19.776 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:30:19.776 "assigned_rate_limits": { 00:30:19.776 "rw_ios_per_sec": 0, 00:30:19.776 "rw_mbytes_per_sec": 0, 00:30:19.776 "r_mbytes_per_sec": 0, 00:30:19.776 "w_mbytes_per_sec": 0 00:30:19.776 }, 00:30:19.776 "claimed": false, 00:30:19.776 "zoned": false, 00:30:19.776 "supported_io_types": { 00:30:19.776 "read": true, 00:30:19.776 "write": true, 00:30:19.776 "unmap": true, 00:30:19.776 "flush": true, 00:30:19.776 "reset": true, 00:30:19.776 "nvme_admin": false, 00:30:19.776 "nvme_io": false, 00:30:19.776 "nvme_io_md": false, 00:30:19.776 "write_zeroes": true, 00:30:19.776 "zcopy": false, 00:30:19.776 "get_zone_info": false, 00:30:19.776 "zone_management": false, 00:30:19.776 "zone_append": false, 00:30:19.776 "compare": true, 00:30:19.776 "compare_and_write": false, 00:30:19.776 "abort": true, 00:30:19.776 "seek_hole": false, 00:30:19.776 "seek_data": false, 00:30:19.776 "copy": true, 00:30:19.776 "nvme_iov_md": false 00:30:19.776 }, 00:30:19.776 "driver_specific": { 00:30:19.776 "gpt": { 00:30:19.776 "base_bdev": "Nvme1n1", 00:30:19.776 "offset_blocks": 655360, 00:30:19.776 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:30:19.776 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:30:19.776 "partition_name": "SPDK_TEST_second" 00:30:19.776 } 00:30:19.776 } 00:30:19.776 } 00:30:19.776 ]' 00:30:19.776 15:55:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:30:19.776 15:55:40 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:30:19.776 15:55:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:30:19.776 15:55:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:30:19.776 15:55:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:30:19.776 15:55:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:30:19.776 15:55:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 62148 00:30:19.776 15:55:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@952 -- # '[' -z 62148 ']' 00:30:19.776 15:55:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # kill -0 62148 00:30:19.776 15:55:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@957 -- # uname 00:30:19.776 15:55:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:19.776 15:55:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62148 00:30:19.776 15:55:41 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:30:19.776 15:55:41 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:30:19.776 killing process with pid 62148 00:30:19.776 15:55:41 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62148' 00:30:19.776 15:55:41 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@971 -- # kill 62148 00:30:19.776 15:55:41 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@976 -- # wait 62148 00:30:21.151 00:30:21.151 real 0m2.993s 00:30:21.151 user 0m3.125s 00:30:21.151 sys 0m0.377s 00:30:21.151 15:55:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:21.151 15:55:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:30:21.151 ************************************ 00:30:21.151 END TEST bdev_gpt_uuid 00:30:21.151 ************************************ 00:30:21.409 15:55:42 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:30:21.409 15:55:42 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:30:21.409 15:55:42 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:30:21.409 15:55:42 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:30:21.409 15:55:42 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:30:21.409 15:55:42 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:30:21.409 15:55:42 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:30:21.409 15:55:42 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:30:21.409 15:55:42 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:21.667 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:21.667 Waiting for block devices as requested 00:30:21.668 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:30:21.926 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:30:21.926 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:30:21.926 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:30:27.193 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:30:27.193 15:55:48 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:30:27.193 15:55:48 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:30:27.525 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:30:27.525 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:30:27.525 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:30:27.525 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:30:27.525 15:55:48 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:30:27.525 00:30:27.525 real 0m56.011s 00:30:27.525 user 1m11.745s 00:30:27.525 sys 0m7.654s 00:30:27.525 15:55:48 blockdev_nvme_gpt -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:27.525 15:55:48 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:30:27.525 ************************************ 00:30:27.525 END TEST blockdev_nvme_gpt 00:30:27.525 ************************************ 00:30:27.525 15:55:48 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:30:27.525 15:55:48 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:30:27.525 15:55:48 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:27.525 15:55:48 -- common/autotest_common.sh@10 -- # set +x 00:30:27.525 ************************************ 00:30:27.525 START TEST nvme 00:30:27.525 ************************************ 00:30:27.525 15:55:48 nvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:30:27.525 * Looking for test storage... 00:30:27.525 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:30:27.525 15:55:48 nvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:27.525 15:55:48 nvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:27.525 15:55:48 nvme -- common/autotest_common.sh@1691 -- # lcov --version 00:30:27.525 15:55:48 nvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:27.525 15:55:48 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:27.526 15:55:48 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:27.526 15:55:48 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:27.526 15:55:48 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:30:27.526 15:55:48 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:30:27.526 15:55:48 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:30:27.526 15:55:48 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:30:27.526 15:55:48 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:30:27.526 15:55:48 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:30:27.526 15:55:48 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:30:27.526 15:55:48 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:27.526 15:55:48 nvme -- scripts/common.sh@344 -- # case "$op" in 00:30:27.526 15:55:48 nvme -- scripts/common.sh@345 -- # : 1 00:30:27.526 15:55:48 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:27.526 15:55:48 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:27.526 15:55:48 nvme -- scripts/common.sh@365 -- # decimal 1 00:30:27.526 15:55:48 nvme -- scripts/common.sh@353 -- # local d=1 00:30:27.526 15:55:48 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:27.526 15:55:48 nvme -- scripts/common.sh@355 -- # echo 1 00:30:27.526 15:55:48 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:30:27.526 15:55:48 nvme -- scripts/common.sh@366 -- # decimal 2 00:30:27.526 15:55:48 nvme -- scripts/common.sh@353 -- # local d=2 00:30:27.526 15:55:48 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:27.526 15:55:48 nvme -- scripts/common.sh@355 -- # echo 2 00:30:27.526 15:55:48 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:30:27.526 15:55:48 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:27.526 15:55:48 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:27.526 15:55:48 nvme -- scripts/common.sh@368 -- # return 0 00:30:27.526 15:55:48 nvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:27.526 15:55:48 nvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:27.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:27.526 --rc genhtml_branch_coverage=1 00:30:27.526 --rc genhtml_function_coverage=1 00:30:27.526 --rc genhtml_legend=1 00:30:27.526 --rc geninfo_all_blocks=1 00:30:27.526 --rc geninfo_unexecuted_blocks=1 00:30:27.526 00:30:27.526 ' 00:30:27.526 15:55:48 nvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:27.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:27.526 --rc genhtml_branch_coverage=1 00:30:27.526 --rc genhtml_function_coverage=1 00:30:27.526 --rc genhtml_legend=1 00:30:27.526 --rc geninfo_all_blocks=1 00:30:27.526 --rc geninfo_unexecuted_blocks=1 00:30:27.526 00:30:27.526 ' 00:30:27.526 15:55:48 nvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:27.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:27.526 --rc genhtml_branch_coverage=1 00:30:27.526 --rc genhtml_function_coverage=1 00:30:27.526 --rc genhtml_legend=1 00:30:27.526 --rc geninfo_all_blocks=1 00:30:27.526 --rc geninfo_unexecuted_blocks=1 00:30:27.526 00:30:27.526 ' 00:30:27.526 15:55:48 nvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:27.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:27.526 --rc genhtml_branch_coverage=1 00:30:27.526 --rc genhtml_function_coverage=1 00:30:27.526 --rc genhtml_legend=1 00:30:27.526 --rc geninfo_all_blocks=1 00:30:27.526 --rc geninfo_unexecuted_blocks=1 00:30:27.526 00:30:27.526 ' 00:30:27.526 15:55:48 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:27.785 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:28.350 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:30:28.350 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:30:28.350 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:30:28.350 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:30:28.350 15:55:49 nvme -- nvme/nvme.sh@79 -- # uname 00:30:28.350 15:55:49 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:30:28.350 15:55:49 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:30:28.350 15:55:49 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:30:28.350 15:55:49 nvme -- common/autotest_common.sh@1084 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:30:28.350 15:55:49 nvme -- 
common/autotest_common.sh@1070 -- # _randomize_va_space=2 00:30:28.350 15:55:49 nvme -- common/autotest_common.sh@1071 -- # echo 0 00:30:28.350 15:55:49 nvme -- common/autotest_common.sh@1073 -- # stubpid=62785 00:30:28.350 Waiting for stub to ready for secondary processes... 00:30:28.350 15:55:49 nvme -- common/autotest_common.sh@1074 -- # echo Waiting for stub to ready for secondary processes... 00:30:28.350 15:55:49 nvme -- common/autotest_common.sh@1075 -- # '[' -e /var/run/spdk_stub0 ']' 00:30:28.350 15:55:49 nvme -- common/autotest_common.sh@1077 -- # [[ -e /proc/62785 ]] 00:30:28.350 15:55:49 nvme -- common/autotest_common.sh@1078 -- # sleep 1s 00:30:28.350 15:55:49 nvme -- common/autotest_common.sh@1072 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:30:28.608 [2024-11-05 15:55:49.724190] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 00:30:28.609 [2024-11-05 15:55:49.724323] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:30:29.175 [2024-11-05 15:55:50.495471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:29.433 [2024-11-05 15:55:50.588961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:29.433 [2024-11-05 15:55:50.589285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:29.433 [2024-11-05 15:55:50.589345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:29.433 [2024-11-05 15:55:50.602903] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:30:29.433 [2024-11-05 15:55:50.603056] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:30:29.433 [2024-11-05 15:55:50.613283] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:30:29.433 [2024-11-05 15:55:50.613617] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:30:29.433 [2024-11-05 15:55:50.615342] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:30:29.433 [2024-11-05 15:55:50.615555] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:30:29.433 [2024-11-05 15:55:50.617415] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:30:29.433 [2024-11-05 15:55:50.619783] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:30:29.433 [2024-11-05 15:55:50.619977] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:30:29.433 [2024-11-05 15:55:50.620073] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:30:29.433 [2024-11-05 15:55:50.622558] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:30:29.433 [2024-11-05 15:55:50.622855] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:30:29.433 [2024-11-05 15:55:50.622935] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:30:29.433 [2024-11-05 15:55:50.622984] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:30:29.433 [2024-11-05 15:55:50.623026] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:30:29.433 done. 00:30:29.433 15:55:50 nvme -- common/autotest_common.sh@1075 -- # '[' -e /var/run/spdk_stub0 ']' 00:30:29.433 15:55:50 nvme -- common/autotest_common.sh@1080 -- # echo done. 00:30:29.433 15:55:50 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:30:29.433 15:55:50 nvme -- common/autotest_common.sh@1103 -- # '[' 10 -le 1 ']' 00:30:29.433 15:55:50 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:29.433 15:55:50 nvme -- common/autotest_common.sh@10 -- # set +x 00:30:29.433 ************************************ 00:30:29.433 START TEST nvme_reset 00:30:29.433 ************************************ 00:30:29.433 15:55:50 nvme.nvme_reset -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:30:29.690 Initializing NVMe Controllers 00:30:29.690 Skipping QEMU NVMe SSD at 0000:00:10.0 00:30:29.690 Skipping QEMU NVMe SSD at 0000:00:11.0 00:30:29.690 Skipping QEMU NVMe SSD at 0000:00:13.0 00:30:29.690 Skipping QEMU NVMe SSD at 0000:00:12.0 00:30:29.690 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:30:29.690 ************************************ 00:30:29.690 END TEST nvme_reset 00:30:29.690 ************************************ 00:30:29.690 00:30:29.690 real 0m0.224s 00:30:29.690 user 0m0.075s 00:30:29.690 sys 0m0.101s 00:30:29.690 15:55:50 nvme.nvme_reset -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:29.690 15:55:50 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:30:29.690 15:55:50 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:30:29.690 15:55:50 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:30:29.690 15:55:50 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:29.690 15:55:50 nvme -- common/autotest_common.sh@10 -- # set +x 00:30:29.690 ************************************ 00:30:29.691 START TEST nvme_identify 00:30:29.691 ************************************ 00:30:29.691 15:55:50 nvme.nvme_identify -- common/autotest_common.sh@1127 -- # nvme_identify 00:30:29.691 15:55:50 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:30:29.691 15:55:50 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:30:29.691 15:55:50 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:30:29.691 15:55:50 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:30:29.691 15:55:50 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # bdfs=() 00:30:29.691 15:55:50 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # local bdfs 00:30:29.691 15:55:50 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:29.691 15:55:50 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:30:29.691 15:55:50 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:30:29.691 15:55:51 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:30:29.691 15:55:51 nvme.nvme_identify -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:30:29.691 15:55:51 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:30:29.951 [2024-11-05 
15:55:51.206189] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 62806 terminated unexpected 00:30:29.951 ===================================================== 00:30:29.951 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:30:29.951 ===================================================== 00:30:29.951 Controller Capabilities/Features 00:30:29.951 ================================ 00:30:29.951 Vendor ID: 1b36 00:30:29.951 Subsystem Vendor ID: 1af4 00:30:29.951 Serial Number: 12340 00:30:29.951 Model Number: QEMU NVMe Ctrl 00:30:29.951 Firmware Version: 8.0.0 00:30:29.951 Recommended Arb Burst: 6 00:30:29.951 IEEE OUI Identifier: 00 54 52 00:30:29.951 Multi-path I/O 00:30:29.951 May have multiple subsystem ports: No 00:30:29.951 May have multiple controllers: No 00:30:29.951 Associated with SR-IOV VF: No 00:30:29.951 Max Data Transfer Size: 524288 00:30:29.951 Max Number of Namespaces: 256 00:30:29.951 Max Number of I/O Queues: 64 00:30:29.951 NVMe Specification Version (VS): 1.4 00:30:29.951 NVMe Specification Version (Identify): 1.4 00:30:29.951 Maximum Queue Entries: 2048 00:30:29.951 Contiguous Queues Required: Yes 00:30:29.951 Arbitration Mechanisms Supported 00:30:29.951 Weighted Round Robin: Not Supported 00:30:29.951 Vendor Specific: Not Supported 00:30:29.951 Reset Timeout: 7500 ms 00:30:29.951 Doorbell Stride: 4 bytes 00:30:29.951 NVM Subsystem Reset: Not Supported 00:30:29.951 Command Sets Supported 00:30:29.951 NVM Command Set: Supported 00:30:29.951 Boot Partition: Not Supported 00:30:29.951 Memory Page Size Minimum: 4096 bytes 00:30:29.951 Memory Page Size Maximum: 65536 bytes 00:30:29.951 Persistent Memory Region: Not Supported 00:30:29.951 Optional Asynchronous Events Supported 00:30:29.951 Namespace Attribute Notices: Supported 00:30:29.951 Firmware Activation Notices: Not Supported 00:30:29.951 ANA Change Notices: Not Supported 00:30:29.951 PLE Aggregate Log Change Notices: Not Supported 00:30:29.951 LBA Status Info Alert Notices: Not Supported 00:30:29.951 EGE Aggregate Log Change Notices: Not Supported 00:30:29.951 Normal NVM Subsystem Shutdown event: Not Supported 00:30:29.951 Zone Descriptor Change Notices: Not Supported 00:30:29.951 Discovery Log Change Notices: Not Supported 00:30:29.951 Controller Attributes 00:30:29.951 128-bit Host Identifier: Not Supported 00:30:29.951 Non-Operational Permissive Mode: Not Supported 00:30:29.951 NVM Sets: Not Supported 00:30:29.951 Read Recovery Levels: Not Supported 00:30:29.951 Endurance Groups: Not Supported 00:30:29.951 Predictable Latency Mode: Not Supported 00:30:29.951 Traffic Based Keep ALive: Not Supported 00:30:29.951 Namespace Granularity: Not Supported 00:30:29.951 SQ Associations: Not Supported 00:30:29.951 UUID List: Not Supported 00:30:29.951 Multi-Domain Subsystem: Not Supported 00:30:29.951 Fixed Capacity Management: Not Supported 00:30:29.952 Variable Capacity Management: Not Supported 00:30:29.952 Delete Endurance Group: Not Supported 00:30:29.952 Delete NVM Set: Not Supported 00:30:29.952 Extended LBA Formats Supported: Supported 00:30:29.952 Flexible Data Placement Supported: Not Supported 00:30:29.952 00:30:29.952 Controller Memory Buffer Support 00:30:29.952 ================================ 00:30:29.952 Supported: No 00:30:29.952 00:30:29.952 Persistent Memory Region Support 00:30:29.952 ================================ 00:30:29.952 Supported: No 00:30:29.952 00:30:29.952 Admin Command Set Attributes 00:30:29.952 ============================ 00:30:29.952 Security Send/Receive: 
Not Supported 00:30:29.952 Format NVM: Supported 00:30:29.952 Firmware Activate/Download: Not Supported 00:30:29.952 Namespace Management: Supported 00:30:29.952 Device Self-Test: Not Supported 00:30:29.952 Directives: Supported 00:30:29.952 NVMe-MI: Not Supported 00:30:29.952 Virtualization Management: Not Supported 00:30:29.952 Doorbell Buffer Config: Supported 00:30:29.952 Get LBA Status Capability: Not Supported 00:30:29.952 Command & Feature Lockdown Capability: Not Supported 00:30:29.952 Abort Command Limit: 4 00:30:29.952 Async Event Request Limit: 4 00:30:29.952 Number of Firmware Slots: N/A 00:30:29.952 Firmware Slot 1 Read-Only: N/A 00:30:29.952 Firmware Activation Without Reset: N/A 00:30:29.952 Multiple Update Detection Support: N/A 00:30:29.952 Firmware Update Granularity: No Information Provided 00:30:29.952 Per-Namespace SMART Log: Yes 00:30:29.952 Asymmetric Namespace Access Log Page: Not Supported 00:30:29.952 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:30:29.952 Command Effects Log Page: Supported 00:30:29.952 Get Log Page Extended Data: Supported 00:30:29.952 Telemetry Log Pages: Not Supported 00:30:29.952 Persistent Event Log Pages: Not Supported 00:30:29.952 Supported Log Pages Log Page: May Support 00:30:29.952 Commands Supported & Effects Log Page: Not Supported 00:30:29.952 Feature Identifiers & Effects Log Page:May Support 00:30:29.952 NVMe-MI Commands & Effects Log Page: May Support 00:30:29.952 Data Area 4 for Telemetry Log: Not Supported 00:30:29.952 Error Log Page Entries Supported: 1 00:30:29.952 Keep Alive: Not Supported 00:30:29.952 00:30:29.952 NVM Command Set Attributes 00:30:29.952 ========================== 00:30:29.952 Submission Queue Entry Size 00:30:29.952 Max: 64 00:30:29.952 Min: 64 00:30:29.952 Completion Queue Entry Size 00:30:29.952 Max: 16 00:30:29.952 Min: 16 00:30:29.952 Number of Namespaces: 256 00:30:29.952 Compare Command: Supported 00:30:29.952 Write Uncorrectable Command: Not Supported 00:30:29.952 Dataset Management Command: Supported 00:30:29.952 Write Zeroes Command: Supported 00:30:29.952 Set Features Save Field: Supported 00:30:29.952 Reservations: Not Supported 00:30:29.952 Timestamp: Supported 00:30:29.952 Copy: Supported 00:30:29.952 Volatile Write Cache: Present 00:30:29.952 Atomic Write Unit (Normal): 1 00:30:29.952 Atomic Write Unit (PFail): 1 00:30:29.952 Atomic Compare & Write Unit: 1 00:30:29.952 Fused Compare & Write: Not Supported 00:30:29.952 Scatter-Gather List 00:30:29.952 SGL Command Set: Supported 00:30:29.952 SGL Keyed: Not Supported 00:30:29.952 SGL Bit Bucket Descriptor: Not Supported 00:30:29.952 SGL Metadata Pointer: Not Supported 00:30:29.952 Oversized SGL: Not Supported 00:30:29.952 SGL Metadata Address: Not Supported 00:30:29.952 SGL Offset: Not Supported 00:30:29.952 Transport SGL Data Block: Not Supported 00:30:29.952 Replay Protected Memory Block: Not Supported 00:30:29.952 00:30:29.952 Firmware Slot Information 00:30:29.952 ========================= 00:30:29.952 Active slot: 1 00:30:29.952 Slot 1 Firmware Revision: 1.0 00:30:29.952 00:30:29.952 00:30:29.952 Commands Supported and Effects 00:30:29.952 ============================== 00:30:29.952 Admin Commands 00:30:29.952 -------------- 00:30:29.952 Delete I/O Submission Queue (00h): Supported 00:30:29.952 Create I/O Submission Queue (01h): Supported 00:30:29.952 Get Log Page (02h): Supported 00:30:29.952 Delete I/O Completion Queue (04h): Supported 00:30:29.952 Create I/O Completion Queue (05h): Supported 00:30:29.952 Identify (06h): Supported 
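The bdev_gpt_uuid assertions at the top of this excerpt boil down to two jq lookups plus a literal string compare. A minimal bash sketch of the same check, with the expected UUID and jq filters taken from the trace; the scripts/rpc.py bdev_get_bdevs call feeding jq is an assumption, since the trace shows only the jq side:

    expected=abf1734f-66e5-4c0f-aa29-4021d4d307df
    # Assumed JSON source; the xtrace above shows only the jq filters.
    bdevs_json=$(scripts/rpc.py bdev_get_bdevs)
    bdev_alias=$(jq -r '.[0].aliases[0]' <<< "$bdevs_json")
    part_guid=$(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<< "$bdevs_json")
    # Quoting the right-hand side forces a literal match; the test reaches the
    # same effect by backslash-escaping every UUID character, because an
    # unquoted right-hand side of == inside [[ ]] is treated as a glob pattern.
    [[ $bdev_alias == "$expected" && $part_guid == "$expected" ]] || exit 1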
00:30:29.952 Abort (08h): Supported 00:30:29.952 Set Features (09h): Supported 00:30:29.952 Get Features (0Ah): Supported 00:30:29.952 Asynchronous Event Request (0Ch): Supported 00:30:29.952 Namespace Attachment (15h): Supported NS-Inventory-Change 00:30:29.952 Directive Send (19h): Supported 00:30:29.952 Directive Receive (1Ah): Supported 00:30:29.952 Virtualization Management (1Ch): Supported 00:30:29.952 Doorbell Buffer Config (7Ch): Supported 00:30:29.952 Format NVM (80h): Supported LBA-Change 00:30:29.952 I/O Commands 00:30:29.952 ------------ 00:30:29.952 Flush (00h): Supported LBA-Change 00:30:29.952 Write (01h): Supported LBA-Change 00:30:29.952 Read (02h): Supported 00:30:29.952 Compare (05h): Supported 00:30:29.952 Write Zeroes (08h): Supported LBA-Change 00:30:29.952 Dataset Management (09h): Supported LBA-Change 00:30:29.952 Unknown (0Ch): Supported 00:30:29.952 Unknown (12h): Supported 00:30:29.952 Copy (19h): Supported LBA-Change 00:30:29.952 Unknown (1Dh): Supported LBA-Change 00:30:29.952 00:30:29.952 Error Log 00:30:29.952 ========= 00:30:29.952 00:30:29.952 Arbitration 00:30:29.952 =========== 00:30:29.952 Arbitration Burst: no limit 00:30:29.952 00:30:29.952 Power Management 00:30:29.952 ================ 00:30:29.952 Number of Power States: 1 00:30:29.952 Current Power State: Power State #0 00:30:29.952 Power State #0: 00:30:29.952 Max Power: 25.00 W 00:30:29.952 Non-Operational State: Operational 00:30:29.952 Entry Latency: 16 microseconds 00:30:29.952 Exit Latency: 4 microseconds 00:30:29.952 Relative Read Throughput: 0 00:30:29.952 Relative Read Latency: 0 00:30:29.952 Relative Write Throughput: 0 00:30:29.952 Relative Write Latency: 0 [2024-11-05 15:55:51.207522] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 62806 terminated unexpected 00:30:29.952 Idle Power: Not Reported 00:30:29.952 Active Power: Not Reported 00:30:29.952 Non-Operational Permissive Mode: Not Supported 00:30:29.952 00:30:29.952 Health Information 00:30:29.952 ================== 00:30:29.952 Critical Warnings: 00:30:29.952 Available Spare Space: OK 00:30:29.952 Temperature: OK 00:30:29.952 Device Reliability: OK 00:30:29.952 Read Only: No 00:30:29.952 Volatile Memory Backup: OK 00:30:29.952 Current Temperature: 323 Kelvin (50 Celsius) 00:30:29.952 Temperature Threshold: 343 Kelvin (70 Celsius) 00:30:29.952 Available Spare: 0% 00:30:29.952 Available Spare Threshold: 0% 00:30:29.952 Life Percentage Used: 0% 00:30:29.952 Data Units Read: 699 00:30:29.952 Data Units Written: 627 00:30:29.952 Host Read Commands: 38896 00:30:29.952 Host Write Commands: 38682 00:30:29.952 Controller Busy Time: 0 minutes 00:30:29.952 Power Cycles: 0 00:30:29.952 Power On Hours: 0 hours 00:30:29.952 Unsafe Shutdowns: 0 00:30:29.952 Unrecoverable Media Errors: 0 00:30:29.952 Lifetime Error Log Entries: 0 00:30:29.952 Warning Temperature Time: 0 minutes 00:30:29.952 Critical Temperature Time: 0 minutes 00:30:29.952 00:30:29.952 Number of Queues 00:30:29.952 ================ 00:30:29.952 Number of I/O Submission Queues: 64 00:30:29.952 Number of I/O Completion Queues: 64 00:30:29.952 00:30:29.952 ZNS Specific Controller Data 00:30:29.952 ============================ 00:30:29.952 Zone Append Size Limit: 0 00:30:29.952 00:30:29.952 00:30:29.952 Active Namespaces 00:30:29.952 ================= 00:30:29.952 Namespace ID:1 00:30:29.952 Error Recovery Timeout: Unlimited 00:30:29.952 Command Set Identifier: NVM (00h) 00:30:29.952 Deallocate: Supported 00:30:29.952
Deallocated/Unwritten Error: Supported 00:30:29.952 Deallocated Read Value: All 0x00 00:30:29.952 Deallocate in Write Zeroes: Not Supported 00:30:29.952 Deallocated Guard Field: 0xFFFF 00:30:29.952 Flush: Supported 00:30:29.952 Reservation: Not Supported 00:30:29.952 Metadata Transferred as: Separate Metadata Buffer 00:30:29.952 Namespace Sharing Capabilities: Private 00:30:29.952 Size (in LBAs): 1548666 (5GiB) 00:30:29.952 Capacity (in LBAs): 1548666 (5GiB) 00:30:29.952 Utilization (in LBAs): 1548666 (5GiB) 00:30:29.952 Thin Provisioning: Not Supported 00:30:29.952 Per-NS Atomic Units: No 00:30:29.952 Maximum Single Source Range Length: 128 00:30:29.952 Maximum Copy Length: 128 00:30:29.952 Maximum Source Range Count: 128 00:30:29.952 NGUID/EUI64 Never Reused: No 00:30:29.952 Namespace Write Protected: No 00:30:29.952 Number of LBA Formats: 8 00:30:29.953 Current LBA Format: LBA Format #07 00:30:29.953 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:29.953 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:29.953 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:29.953 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:29.953 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:29.953 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:29.953 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:29.953 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:29.953 00:30:29.953 NVM Specific Namespace Data 00:30:29.953 =========================== 00:30:29.953 Logical Block Storage Tag Mask: 0 00:30:29.953 Protection Information Capabilities: 00:30:29.953 16b Guard Protection Information Storage Tag Support: No 00:30:29.953 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:30:29.953 Storage Tag Check Read Support: No 00:30:29.953 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:29.953 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:29.953 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:29.953 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:29.953 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:29.953 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:29.953 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:29.953 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:29.953 ===================================================== 00:30:29.953 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:30:29.953 ===================================================== 00:30:29.953 Controller Capabilities/Features 00:30:29.953 ================================ 00:30:29.953 Vendor ID: 1b36 00:30:29.953 Subsystem Vendor ID: 1af4 00:30:29.953 Serial Number: 12341 00:30:29.953 Model Number: QEMU NVMe Ctrl 00:30:29.953 Firmware Version: 8.0.0 00:30:29.953 Recommended Arb Burst: 6 00:30:29.953 IEEE OUI Identifier: 00 54 52 00:30:29.953 Multi-path I/O 00:30:29.953 May have multiple subsystem ports: No 00:30:29.953 May have multiple controllers: No 00:30:29.953 Associated with SR-IOV VF: No 00:30:29.953 Max Data Transfer Size: 524288 00:30:29.953 Max Number of Namespaces: 256 00:30:29.953 Max Number of I/O Queues: 64 00:30:29.953 NVMe Specification Version (VS): 1.4 00:30:29.953 NVMe 
Specification Version (Identify): 1.4 00:30:29.953 Maximum Queue Entries: 2048 00:30:29.953 Contiguous Queues Required: Yes 00:30:29.953 Arbitration Mechanisms Supported 00:30:29.953 Weighted Round Robin: Not Supported 00:30:29.953 Vendor Specific: Not Supported 00:30:29.953 Reset Timeout: 7500 ms 00:30:29.953 Doorbell Stride: 4 bytes 00:30:29.953 NVM Subsystem Reset: Not Supported 00:30:29.953 Command Sets Supported 00:30:29.953 NVM Command Set: Supported 00:30:29.953 Boot Partition: Not Supported 00:30:29.953 Memory Page Size Minimum: 4096 bytes 00:30:29.953 Memory Page Size Maximum: 65536 bytes 00:30:29.953 Persistent Memory Region: Not Supported 00:30:29.953 Optional Asynchronous Events Supported 00:30:29.953 Namespace Attribute Notices: Supported 00:30:29.953 Firmware Activation Notices: Not Supported 00:30:29.953 ANA Change Notices: Not Supported 00:30:29.953 PLE Aggregate Log Change Notices: Not Supported 00:30:29.953 LBA Status Info Alert Notices: Not Supported 00:30:29.953 EGE Aggregate Log Change Notices: Not Supported 00:30:29.953 Normal NVM Subsystem Shutdown event: Not Supported 00:30:29.953 Zone Descriptor Change Notices: Not Supported 00:30:29.953 Discovery Log Change Notices: Not Supported 00:30:29.953 Controller Attributes 00:30:29.953 128-bit Host Identifier: Not Supported 00:30:29.953 Non-Operational Permissive Mode: Not Supported 00:30:29.953 NVM Sets: Not Supported 00:30:29.953 Read Recovery Levels: Not Supported 00:30:29.953 Endurance Groups: Not Supported 00:30:29.953 Predictable Latency Mode: Not Supported 00:30:29.953 Traffic Based Keep ALive: Not Supported 00:30:29.953 Namespace Granularity: Not Supported 00:30:29.953 SQ Associations: Not Supported 00:30:29.953 UUID List: Not Supported 00:30:29.953 Multi-Domain Subsystem: Not Supported 00:30:29.953 Fixed Capacity Management: Not Supported 00:30:29.953 Variable Capacity Management: Not Supported 00:30:29.953 Delete Endurance Group: Not Supported 00:30:29.953 Delete NVM Set: Not Supported 00:30:29.953 Extended LBA Formats Supported: Supported 00:30:29.953 Flexible Data Placement Supported: Not Supported 00:30:29.953 00:30:29.953 Controller Memory Buffer Support 00:30:29.953 ================================ 00:30:29.953 Supported: No 00:30:29.953 00:30:29.953 Persistent Memory Region Support 00:30:29.953 ================================ 00:30:29.953 Supported: No 00:30:29.953 00:30:29.953 Admin Command Set Attributes 00:30:29.953 ============================ 00:30:29.953 Security Send/Receive: Not Supported 00:30:29.953 Format NVM: Supported 00:30:29.953 Firmware Activate/Download: Not Supported 00:30:29.953 Namespace Management: Supported 00:30:29.953 Device Self-Test: Not Supported 00:30:29.953 Directives: Supported 00:30:29.953 NVMe-MI: Not Supported 00:30:29.953 Virtualization Management: Not Supported 00:30:29.953 Doorbell Buffer Config: Supported 00:30:29.953 Get LBA Status Capability: Not Supported 00:30:29.953 Command & Feature Lockdown Capability: Not Supported 00:30:29.953 Abort Command Limit: 4 00:30:29.953 Async Event Request Limit: 4 00:30:29.953 Number of Firmware Slots: N/A 00:30:29.953 Firmware Slot 1 Read-Only: N/A 00:30:29.953 Firmware Activation Without Reset: N/A 00:30:29.953 Multiple Update Detection Support: N/A 00:30:29.953 Firmware Update Granularity: No Information Provided 00:30:29.953 Per-Namespace SMART Log: Yes 00:30:29.953 Asymmetric Namespace Access Log Page: Not Supported 00:30:29.953 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:30:29.953 Command Effects Log Page: Supported 
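For reference while the identify dump scrolls past: the list of PCI addresses it iterates comes from the get_nvme_bdfs fragment traced just before this output. A self-contained sketch of that discovery step, with rootdir, script path, and jq filter taken verbatim from the trace (only the error message is added here):

    #!/usr/bin/env bash
    # Enumerate NVMe PCI addresses the way get_nvme_bdfs does:
    # gen_nvme.sh emits an SPDK JSON config and jq pulls each controller's traddr.
    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe devices found" >&2; exit 1; }
    printf '%s\n' "${bdfs[@]}"   # on this VM: 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0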
00:30:29.953 Get Log Page Extended Data: Supported 00:30:29.953 Telemetry Log Pages: Not Supported 00:30:29.953 Persistent Event Log Pages: Not Supported 00:30:29.953 Supported Log Pages Log Page: May Support 00:30:29.953 Commands Supported & Effects Log Page: Not Supported 00:30:29.953 Feature Identifiers & Effects Log Page:May Support 00:30:29.953 NVMe-MI Commands & Effects Log Page: May Support 00:30:29.953 Data Area 4 for Telemetry Log: Not Supported 00:30:29.953 Error Log Page Entries Supported: 1 00:30:29.953 Keep Alive: Not Supported 00:30:29.953 00:30:29.953 NVM Command Set Attributes 00:30:29.953 ========================== 00:30:29.953 Submission Queue Entry Size 00:30:29.953 Max: 64 00:30:29.953 Min: 64 00:30:29.953 Completion Queue Entry Size 00:30:29.953 Max: 16 00:30:29.953 Min: 16 00:30:29.953 Number of Namespaces: 256 00:30:29.953 Compare Command: Supported 00:30:29.953 Write Uncorrectable Command: Not Supported 00:30:29.953 Dataset Management Command: Supported 00:30:29.953 Write Zeroes Command: Supported 00:30:29.953 Set Features Save Field: Supported 00:30:29.953 Reservations: Not Supported 00:30:29.953 Timestamp: Supported 00:30:29.953 Copy: Supported 00:30:29.953 Volatile Write Cache: Present 00:30:29.953 Atomic Write Unit (Normal): 1 00:30:29.953 Atomic Write Unit (PFail): 1 00:30:29.953 Atomic Compare & Write Unit: 1 00:30:29.953 Fused Compare & Write: Not Supported 00:30:29.953 Scatter-Gather List 00:30:29.953 SGL Command Set: Supported 00:30:29.953 SGL Keyed: Not Supported 00:30:29.953 SGL Bit Bucket Descriptor: Not Supported 00:30:29.953 SGL Metadata Pointer: Not Supported 00:30:29.953 Oversized SGL: Not Supported 00:30:29.953 SGL Metadata Address: Not Supported 00:30:29.953 SGL Offset: Not Supported 00:30:29.953 Transport SGL Data Block: Not Supported 00:30:29.953 Replay Protected Memory Block: Not Supported 00:30:29.953 00:30:29.953 Firmware Slot Information 00:30:29.953 ========================= 00:30:29.953 Active slot: 1 00:30:29.953 Slot 1 Firmware Revision: 1.0 00:30:29.953 00:30:29.953 00:30:29.953 Commands Supported and Effects 00:30:29.953 ============================== 00:30:29.953 Admin Commands 00:30:29.953 -------------- 00:30:29.953 Delete I/O Submission Queue (00h): Supported 00:30:29.953 Create I/O Submission Queue (01h): Supported 00:30:29.953 Get Log Page (02h): Supported 00:30:29.953 Delete I/O Completion Queue (04h): Supported 00:30:29.953 Create I/O Completion Queue (05h): Supported 00:30:29.953 Identify (06h): Supported 00:30:29.953 Abort (08h): Supported 00:30:29.953 Set Features (09h): Supported 00:30:29.953 Get Features (0Ah): Supported 00:30:29.953 Asynchronous Event Request (0Ch): Supported 00:30:29.953 Namespace Attachment (15h): Supported NS-Inventory-Change 00:30:29.953 Directive Send (19h): Supported 00:30:29.953 Directive Receive (1Ah): Supported 00:30:29.954 Virtualization Management (1Ch): Supported 00:30:29.954 Doorbell Buffer Config (7Ch): Supported 00:30:29.954 Format NVM (80h): Supported LBA-Change 00:30:29.954 I/O Commands 00:30:29.954 ------------ 00:30:29.954 Flush (00h): Supported LBA-Change 00:30:29.954 Write (01h): Supported LBA-Change 00:30:29.954 Read (02h): Supported 00:30:29.954 Compare (05h): Supported 00:30:29.954 Write Zeroes (08h): Supported LBA-Change 00:30:29.954 Dataset Management (09h): Supported LBA-Change 00:30:29.954 Unknown (0Ch): Supported 00:30:29.954 Unknown (12h): Supported 00:30:29.954 Copy (19h): Supported LBA-Change 00:30:29.954 Unknown (1Dh): Supported LBA-Change 00:30:29.954 00:30:29.954 Error 
Log 00:30:29.954 ========= 00:30:29.954 00:30:29.954 Arbitration 00:30:29.954 =========== 00:30:29.954 Arbitration Burst: no limit 00:30:29.954 00:30:29.954 Power Management 00:30:29.954 ================ 00:30:29.954 Number of Power States: 1 00:30:29.954 Current Power State: Power State #0 00:30:29.954 Power State #0: 00:30:29.954 Max Power: 25.00 W 00:30:29.954 Non-Operational State: Operational 00:30:29.954 Entry Latency: 16 microseconds 00:30:29.954 Exit Latency: 4 microseconds 00:30:29.954 Relative Read Throughput: 0 00:30:29.954 Relative Read Latency: 0 00:30:29.954 Relative Write Throughput: 0 00:30:29.954 Relative Write Latency: 0 00:30:29.954 Idle Power: Not Reported 00:30:29.954 Active Power: Not Reported 00:30:29.954 Non-Operational Permissive Mode: Not Supported 00:30:29.954 00:30:29.954 Health Information 00:30:29.954 ================== 00:30:29.954 Critical Warnings: 00:30:29.954 Available Spare Space: OK [2024-11-05 15:55:51.208285] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 62806 terminated unexpected 00:30:29.954 Temperature: OK 00:30:29.954 Device Reliability: OK 00:30:29.954 Read Only: No 00:30:29.954 Volatile Memory Backup: OK 00:30:29.954 Current Temperature: 323 Kelvin (50 Celsius) 00:30:29.954 Temperature Threshold: 343 Kelvin (70 Celsius) 00:30:29.954 Available Spare: 0% 00:30:29.954 Available Spare Threshold: 0% 00:30:29.954 Life Percentage Used: 0% 00:30:29.954 Data Units Read: 1068 00:30:29.954 Data Units Written: 935 00:30:29.954 Host Read Commands: 57362 00:30:29.954 Host Write Commands: 56156 00:30:29.954 Controller Busy Time: 0 minutes 00:30:29.954 Power Cycles: 0 00:30:29.954 Power On Hours: 0 hours 00:30:29.954 Unsafe Shutdowns: 0 00:30:29.954 Unrecoverable Media Errors: 0 00:30:29.954 Lifetime Error Log Entries: 0 00:30:29.954 Warning Temperature Time: 0 minutes 00:30:29.954 Critical Temperature Time: 0 minutes 00:30:29.954 00:30:29.954 Number of Queues 00:30:29.954 ================ 00:30:29.954 Number of I/O Submission Queues: 64 00:30:29.954 Number of I/O Completion Queues: 64 00:30:29.954 00:30:29.954 ZNS Specific Controller Data 00:30:29.954 ============================ 00:30:29.954 Zone Append Size Limit: 0 00:30:29.954 00:30:29.954 00:30:29.954 Active Namespaces 00:30:29.954 ================= 00:30:29.954 Namespace ID:1 00:30:29.954 Error Recovery Timeout: Unlimited 00:30:29.954 Command Set Identifier: NVM (00h) 00:30:29.954 Deallocate: Supported 00:30:29.954 Deallocated/Unwritten Error: Supported 00:30:29.954 Deallocated Read Value: All 0x00 00:30:29.954 Deallocate in Write Zeroes: Not Supported 00:30:29.954 Deallocated Guard Field: 0xFFFF 00:30:29.954 Flush: Supported 00:30:29.954 Reservation: Not Supported 00:30:29.954 Namespace Sharing Capabilities: Private 00:30:29.954 Size (in LBAs): 1310720 (5GiB) 00:30:29.954 Capacity (in LBAs): 1310720 (5GiB) 00:30:29.954 Utilization (in LBAs): 1310720 (5GiB) 00:30:29.954 Thin Provisioning: Not Supported 00:30:29.954 Per-NS Atomic Units: No 00:30:29.954 Maximum Single Source Range Length: 128 00:30:29.954 Maximum Copy Length: 128 00:30:29.954 Maximum Source Range Count: 128 00:30:29.954 NGUID/EUI64 Never Reused: No 00:30:29.954 Namespace Write Protected: No 00:30:29.954 Number of LBA Formats: 8 00:30:29.954 Current LBA Format: LBA Format #04 00:30:29.954 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:29.954 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:29.954 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:29.954 LBA Format #03:
Data Size: 512 Metadata Size: 64 00:30:29.954 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:29.954 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:29.954 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:29.954 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:29.954 00:30:29.954 NVM Specific Namespace Data 00:30:29.954 =========================== 00:30:29.954 Logical Block Storage Tag Mask: 0 00:30:29.954 Protection Information Capabilities: 00:30:29.954 16b Guard Protection Information Storage Tag Support: No 00:30:29.954 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:30:29.954 Storage Tag Check Read Support: No 00:30:29.954 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:29.954 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:29.954 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:29.954 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:29.954 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:29.954 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:29.954 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:29.954 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:29.954 ===================================================== 00:30:29.954 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:30:29.954 ===================================================== 00:30:29.954 Controller Capabilities/Features 00:30:29.954 ================================ 00:30:29.954 Vendor ID: 1b36 00:30:29.954 Subsystem Vendor ID: 1af4 00:30:29.954 Serial Number: 12343 00:30:29.954 Model Number: QEMU NVMe Ctrl 00:30:29.954 Firmware Version: 8.0.0 00:30:29.954 Recommended Arb Burst: 6 00:30:29.954 IEEE OUI Identifier: 00 54 52 00:30:29.954 Multi-path I/O 00:30:29.954 May have multiple subsystem ports: No 00:30:29.954 May have multiple controllers: Yes 00:30:29.954 Associated with SR-IOV VF: No 00:30:29.954 Max Data Transfer Size: 524288 00:30:29.954 Max Number of Namespaces: 256 00:30:29.954 Max Number of I/O Queues: 64 00:30:29.954 NVMe Specification Version (VS): 1.4 00:30:29.954 NVMe Specification Version (Identify): 1.4 00:30:29.954 Maximum Queue Entries: 2048 00:30:29.954 Contiguous Queues Required: Yes 00:30:29.954 Arbitration Mechanisms Supported 00:30:29.954 Weighted Round Robin: Not Supported 00:30:29.954 Vendor Specific: Not Supported 00:30:29.954 Reset Timeout: 7500 ms 00:30:29.954 Doorbell Stride: 4 bytes 00:30:29.954 NVM Subsystem Reset: Not Supported 00:30:29.954 Command Sets Supported 00:30:29.954 NVM Command Set: Supported 00:30:29.954 Boot Partition: Not Supported 00:30:29.954 Memory Page Size Minimum: 4096 bytes 00:30:29.954 Memory Page Size Maximum: 65536 bytes 00:30:29.954 Persistent Memory Region: Not Supported 00:30:29.954 Optional Asynchronous Events Supported 00:30:29.954 Namespace Attribute Notices: Supported 00:30:29.954 Firmware Activation Notices: Not Supported 00:30:29.954 ANA Change Notices: Not Supported 00:30:29.954 PLE Aggregate Log Change Notices: Not Supported 00:30:29.954 LBA Status Info Alert Notices: Not Supported 00:30:29.954 EGE Aggregate Log Change Notices: Not Supported 00:30:29.954 Normal NVM Subsystem Shutdown event: Not Supported 00:30:29.954 Zone 
Descriptor Change Notices: Not Supported 00:30:29.954 Discovery Log Change Notices: Not Supported 00:30:29.954 Controller Attributes 00:30:29.954 128-bit Host Identifier: Not Supported 00:30:29.954 Non-Operational Permissive Mode: Not Supported 00:30:29.954 NVM Sets: Not Supported 00:30:29.954 Read Recovery Levels: Not Supported 00:30:29.954 Endurance Groups: Supported 00:30:29.954 Predictable Latency Mode: Not Supported 00:30:29.954 Traffic Based Keep ALive: Not Supported 00:30:29.954 Namespace Granularity: Not Supported 00:30:29.954 SQ Associations: Not Supported 00:30:29.954 UUID List: Not Supported 00:30:29.954 Multi-Domain Subsystem: Not Supported 00:30:29.954 Fixed Capacity Management: Not Supported 00:30:29.954 Variable Capacity Management: Not Supported 00:30:29.954 Delete Endurance Group: Not Supported 00:30:29.954 Delete NVM Set: Not Supported 00:30:29.954 Extended LBA Formats Supported: Supported 00:30:29.954 Flexible Data Placement Supported: Supported 00:30:29.954 00:30:29.954 Controller Memory Buffer Support 00:30:29.954 ================================ 00:30:29.954 Supported: No 00:30:29.954 00:30:29.954 Persistent Memory Region Support 00:30:29.954 ================================ 00:30:29.955 Supported: No 00:30:29.955 00:30:29.955 Admin Command Set Attributes 00:30:29.955 ============================ 00:30:29.955 Security Send/Receive: Not Supported 00:30:29.955 Format NVM: Supported 00:30:29.955 Firmware Activate/Download: Not Supported 00:30:29.955 Namespace Management: Supported 00:30:29.955 Device Self-Test: Not Supported 00:30:29.955 Directives: Supported 00:30:29.955 NVMe-MI: Not Supported 00:30:29.955 Virtualization Management: Not Supported 00:30:29.955 Doorbell Buffer Config: Supported 00:30:29.955 Get LBA Status Capability: Not Supported 00:30:29.955 Command & Feature Lockdown Capability: Not Supported 00:30:29.955 Abort Command Limit: 4 00:30:29.955 Async Event Request Limit: 4 00:30:29.955 Number of Firmware Slots: N/A 00:30:29.955 Firmware Slot 1 Read-Only: N/A 00:30:29.955 Firmware Activation Without Reset: N/A 00:30:29.955 Multiple Update Detection Support: N/A 00:30:29.955 Firmware Update Granularity: No Information Provided 00:30:29.955 Per-Namespace SMART Log: Yes 00:30:29.955 Asymmetric Namespace Access Log Page: Not Supported 00:30:29.955 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:30:29.955 Command Effects Log Page: Supported 00:30:29.955 Get Log Page Extended Data: Supported 00:30:29.955 Telemetry Log Pages: Not Supported 00:30:29.955 Persistent Event Log Pages: Not Supported 00:30:29.955 Supported Log Pages Log Page: May Support 00:30:29.955 Commands Supported & Effects Log Page: Not Supported 00:30:29.955 Feature Identifiers & Effects Log Page:May Support 00:30:29.955 NVMe-MI Commands & Effects Log Page: May Support 00:30:29.955 Data Area 4 for Telemetry Log: Not Supported 00:30:29.955 Error Log Page Entries Supported: 1 00:30:29.955 Keep Alive: Not Supported 00:30:29.955 00:30:29.955 NVM Command Set Attributes 00:30:29.955 ========================== 00:30:29.955 Submission Queue Entry Size 00:30:29.955 Max: 64 00:30:29.955 Min: 64 00:30:29.955 Completion Queue Entry Size 00:30:29.955 Max: 16 00:30:29.955 Min: 16 00:30:29.955 Number of Namespaces: 256 00:30:29.955 Compare Command: Supported 00:30:29.955 Write Uncorrectable Command: Not Supported 00:30:29.955 Dataset Management Command: Supported 00:30:29.955 Write Zeroes Command: Supported 00:30:29.955 Set Features Save Field: Supported 00:30:29.955 Reservations: Not Supported 00:30:29.955 
Timestamp: Supported 00:30:29.955 Copy: Supported 00:30:29.955 Volatile Write Cache: Present 00:30:29.955 Atomic Write Unit (Normal): 1 00:30:29.955 Atomic Write Unit (PFail): 1 00:30:29.955 Atomic Compare & Write Unit: 1 00:30:29.955 Fused Compare & Write: Not Supported 00:30:29.955 Scatter-Gather List 00:30:29.955 SGL Command Set: Supported 00:30:29.955 SGL Keyed: Not Supported 00:30:29.955 SGL Bit Bucket Descriptor: Not Supported 00:30:29.955 SGL Metadata Pointer: Not Supported 00:30:29.955 Oversized SGL: Not Supported 00:30:29.955 SGL Metadata Address: Not Supported 00:30:29.955 SGL Offset: Not Supported 00:30:29.955 Transport SGL Data Block: Not Supported 00:30:29.955 Replay Protected Memory Block: Not Supported 00:30:29.955 00:30:29.955 Firmware Slot Information 00:30:29.955 ========================= 00:30:29.955 Active slot: 1 00:30:29.955 Slot 1 Firmware Revision: 1.0 00:30:29.955 00:30:29.955 00:30:29.955 Commands Supported and Effects 00:30:29.955 ============================== 00:30:29.955 Admin Commands 00:30:29.955 -------------- 00:30:29.955 Delete I/O Submission Queue (00h): Supported 00:30:29.955 Create I/O Submission Queue (01h): Supported 00:30:29.955 Get Log Page (02h): Supported 00:30:29.955 Delete I/O Completion Queue (04h): Supported 00:30:29.955 Create I/O Completion Queue (05h): Supported 00:30:29.955 Identify (06h): Supported 00:30:29.955 Abort (08h): Supported 00:30:29.955 Set Features (09h): Supported 00:30:29.955 Get Features (0Ah): Supported 00:30:29.955 Asynchronous Event Request (0Ch): Supported 00:30:29.955 Namespace Attachment (15h): Supported NS-Inventory-Change 00:30:29.955 Directive Send (19h): Supported 00:30:29.955 Directive Receive (1Ah): Supported 00:30:29.955 Virtualization Management (1Ch): Supported 00:30:29.955 Doorbell Buffer Config (7Ch): Supported 00:30:29.955 Format NVM (80h): Supported LBA-Change 00:30:29.955 I/O Commands 00:30:29.955 ------------ 00:30:29.955 Flush (00h): Supported LBA-Change 00:30:29.955 Write (01h): Supported LBA-Change 00:30:29.955 Read (02h): Supported 00:30:29.955 Compare (05h): Supported 00:30:29.955 Write Zeroes (08h): Supported LBA-Change 00:30:29.955 Dataset Management (09h): Supported LBA-Change 00:30:29.955 Unknown (0Ch): Supported 00:30:29.955 Unknown (12h): Supported 00:30:29.955 Copy (19h): Supported LBA-Change 00:30:29.955 Unknown (1Dh): Supported LBA-Change 00:30:29.955 00:30:29.955 Error Log 00:30:29.955 ========= 00:30:29.955 00:30:29.955 Arbitration 00:30:29.955 =========== 00:30:29.955 Arbitration Burst: no limit 00:30:29.955 00:30:29.955 Power Management 00:30:29.955 ================ 00:30:29.955 Number of Power States: 1 00:30:29.955 Current Power State: Power State #0 00:30:29.955 Power State #0: 00:30:29.955 Max Power: 25.00 W 00:30:29.955 Non-Operational State: Operational 00:30:29.955 Entry Latency: 16 microseconds 00:30:29.955 Exit Latency: 4 microseconds 00:30:29.955 Relative Read Throughput: 0 00:30:29.955 Relative Read Latency: 0 00:30:29.955 Relative Write Throughput: 0 00:30:29.955 Relative Write Latency: 0 00:30:29.955 Idle Power: Not Reported 00:30:29.955 Active Power: Not Reported 00:30:29.955 Non-Operational Permissive Mode: Not Supported 00:30:29.955 00:30:29.955 Health Information 00:30:29.955 ================== 00:30:29.955 Critical Warnings: 00:30:29.955 Available Spare Space: OK 00:30:29.955 Temperature: OK 00:30:29.955 Device Reliability: OK 00:30:29.955 Read Only: No 00:30:29.955 Volatile Memory Backup: OK 00:30:29.955 Current Temperature: 323 Kelvin (50 Celsius) 00:30:29.955 
Temperature Threshold: 343 Kelvin (70 Celsius) 00:30:29.955 Available Spare: 0% 00:30:29.955 Available Spare Threshold: 0% 00:30:29.955 Life Percentage Used: 0% 00:30:29.955 Data Units Read: 937 00:30:29.955 Data Units Written: 866 00:30:29.955 Host Read Commands: 41176 00:30:29.955 Host Write Commands: 40599 00:30:29.955 Controller Busy Time: 0 minutes 00:30:29.955 Power Cycles: 0 00:30:29.955 Power On Hours: 0 hours 00:30:29.955 Unsafe Shutdowns: 0 00:30:29.955 Unrecoverable Media Errors: 0 00:30:29.955 Lifetime Error Log Entries: 0 00:30:29.955 Warning Temperature Time: 0 minutes 00:30:29.955 Critical Temperature Time: 0 minutes 00:30:29.955 00:30:29.955 Number of Queues 00:30:29.955 ================ 00:30:29.955 Number of I/O Submission Queues: 64 00:30:29.955 Number of I/O Completion Queues: 64 00:30:29.955 00:30:29.955 ZNS Specific Controller Data 00:30:29.955 ============================ 00:30:29.955 Zone Append Size Limit: 0 00:30:29.955 00:30:29.955 00:30:29.955 Active Namespaces 00:30:29.955 ================= 00:30:29.955 Namespace ID:1 00:30:29.955 Error Recovery Timeout: Unlimited 00:30:29.955 Command Set Identifier: NVM (00h) 00:30:29.955 Deallocate: Supported 00:30:29.955 Deallocated/Unwritten Error: Supported 00:30:29.955 Deallocated Read Value: All 0x00 00:30:29.955 Deallocate in Write Zeroes: Not Supported 00:30:29.955 Deallocated Guard Field: 0xFFFF 00:30:29.955 Flush: Supported 00:30:29.955 Reservation: Not Supported 00:30:29.955 Namespace Sharing Capabilities: Multiple Controllers 00:30:29.955 Size (in LBAs): 262144 (1GiB) 00:30:29.955 Capacity (in LBAs): 262144 (1GiB) 00:30:29.955 Utilization (in LBAs): 262144 (1GiB) 00:30:29.955 Thin Provisioning: Not Supported 00:30:29.955 Per-NS Atomic Units: No 00:30:29.955 Maximum Single Source Range Length: 128 00:30:29.955 Maximum Copy Length: 128 00:30:29.955 Maximum Source Range Count: 128 00:30:29.955 NGUID/EUI64 Never Reused: No 00:30:29.955 Namespace Write Protected: No 00:30:29.955 Endurance group ID: 1 00:30:29.955 Number of LBA Formats: 8 00:30:29.955 Current LBA Format: LBA Format #04 00:30:29.955 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:29.955 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:29.955 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:29.955 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:29.955 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:29.955 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:29.955 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:29.955 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:29.955 00:30:29.955 Get Feature FDP: 00:30:29.955 ================ 00:30:29.955 Enabled: Yes 00:30:29.955 FDP configuration index: 0 00:30:29.956 00:30:29.956 FDP configurations log page 00:30:29.956 =========================== 00:30:29.956 Number of FDP configurations: 1 00:30:29.956 Version: 0 00:30:29.956 Size: 112 00:30:29.956 FDP Configuration Descriptor: 0 00:30:29.956 Descriptor Size: 96 00:30:29.956 Reclaim Group Identifier format: 2 00:30:29.956 FDP Volatile Write Cache: Not Present 00:30:29.956 FDP Configuration: Valid 00:30:29.956 Vendor Specific Size: 0 00:30:29.956 Number of Reclaim Groups: 2 00:30:29.956 Number of Reclaim Unit Handles: 8 00:30:29.956 Max Placement Identifiers: 128 00:30:29.956 Number of Namespaces Supported: 256 00:30:29.956 Reclaim unit Nominal Size: 6000000 bytes 00:30:29.956 Estimated Reclaim Unit Time Limit: Not Reported 00:30:29.956 RUH Desc #000: RUH Type: Initially Isolated 00:30:29.956 RUH Desc #001: RUH
Type: Initially Isolated 00:30:29.956 RUH Desc #002: RUH Type: Initially Isolated 00:30:29.956 RUH Desc #003: RUH Type: Initially Isolated 00:30:29.956 RUH Desc #004: RUH Type: Initially Isolated 00:30:29.956 RUH Desc #005: RUH Type: Initially Isolated 00:30:29.956 RUH Desc #006: RUH Type: Initially Isolated 00:30:29.956 RUH Desc #007: RUH Type: Initially Isolated 00:30:29.956 00:30:29.956 FDP reclaim unit handle usage log page 00:30:29.956 ====================================== 00:30:29.956 Number of Reclaim Unit Handles: 8 00:30:29.956 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:30:29.956 RUH Usage Desc #001: RUH Attributes: Unused 00:30:29.956 RUH Usage Desc #002: RUH Attributes: Unused 00:30:29.956 RUH Usage Desc #003: RUH Attributes: Unused 00:30:29.956 RUH Usage Desc #004: RUH Attributes: Unused 00:30:29.956 RUH Usage Desc #005: RUH Attributes: Unused 00:30:29.956 RUH Usage Desc #006: RUH Attributes: Unused 00:30:29.956 RUH Usage Desc #007: RUH Attributes: Unused 00:30:29.956 00:30:29.956 FDP statistics log page 00:30:29.956 ======================= 00:30:29.956 Host bytes with metadata written: 527736832 [2024-11-05 15:55:51.209523] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 62806 terminated unexpected 00:30:29.956 Media bytes with metadata written: 527794176 00:30:29.956 Media bytes erased: 0 00:30:29.956 00:30:29.956 FDP events log page 00:30:29.956 =================== 00:30:29.956 Number of FDP events: 0 00:30:29.956 00:30:29.956 NVM Specific Namespace Data 00:30:29.956 =========================== 00:30:29.956 Logical Block Storage Tag Mask: 0 00:30:29.956 Protection Information Capabilities: 00:30:29.956 16b Guard Protection Information Storage Tag Support: No 00:30:29.956 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:30:29.956 Storage Tag Check Read Support: No 00:30:29.956 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:29.956 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:29.956 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:29.956 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:29.956 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:29.956 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:29.956 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:29.956 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:29.956 ===================================================== 00:30:29.956 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:30:29.956 ===================================================== 00:30:29.956 Controller Capabilities/Features 00:30:29.956 ================================ 00:30:29.956 Vendor ID: 1b36 00:30:29.956 Subsystem Vendor ID: 1af4 00:30:29.956 Serial Number: 12342 00:30:29.956 Model Number: QEMU NVMe Ctrl 00:30:29.956 Firmware Version: 8.0.0 00:30:29.956 Recommended Arb Burst: 6 00:30:29.956 IEEE OUI Identifier: 00 54 52 00:30:29.956 Multi-path I/O 00:30:29.956 May have multiple subsystem ports: No 00:30:29.956 May have multiple controllers: No 00:30:29.956 Associated with SR-IOV VF: No 00:30:29.956 Max Data Transfer Size: 524288 00:30:29.956 Max Number of Namespaces: 256
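The lcov version probe traced at the start of TEST nvme walks cmp_versions component by component over dot/dash-separated fields. A compact re-creation of that less-than helper, condensed from the xtrace (the one-function form and early returns are a simplification, not the verbatim scripts/common.sh code):

    # lt A B: succeed when version A sorts strictly before version B.
    lt() {
        local -a v1 v2
        local i
        IFS=.- read -ra v1 <<< "$1"
        IFS=.- read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1  # first differing field decides
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        done
        return 1  # equal versions are not less-than
    }
    # As in the trace: lcov 1.15 sorts before 2, so the old-style flags stay on.
    lt "$(lcov --version | awk '{print $NF}')" 2 &&
        lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'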
00:30:29.956 Max Number of I/O Queues: 64 00:30:29.956 NVMe Specification Version (VS): 1.4 00:30:29.956 NVMe Specification Version (Identify): 1.4 00:30:29.956 Maximum Queue Entries: 2048 00:30:29.956 Contiguous Queues Required: Yes 00:30:29.956 Arbitration Mechanisms Supported 00:30:29.956 Weighted Round Robin: Not Supported 00:30:29.956 Vendor Specific: Not Supported 00:30:29.956 Reset Timeout: 7500 ms 00:30:29.956 Doorbell Stride: 4 bytes 00:30:29.956 NVM Subsystem Reset: Not Supported 00:30:29.956 Command Sets Supported 00:30:29.956 NVM Command Set: Supported 00:30:29.956 Boot Partition: Not Supported 00:30:29.956 Memory Page Size Minimum: 4096 bytes 00:30:29.956 Memory Page Size Maximum: 65536 bytes 00:30:29.956 Persistent Memory Region: Not Supported 00:30:29.956 Optional Asynchronous Events Supported 00:30:29.956 Namespace Attribute Notices: Supported 00:30:29.956 Firmware Activation Notices: Not Supported 00:30:29.956 ANA Change Notices: Not Supported 00:30:29.956 PLE Aggregate Log Change Notices: Not Supported 00:30:29.956 LBA Status Info Alert Notices: Not Supported 00:30:29.956 EGE Aggregate Log Change Notices: Not Supported 00:30:29.956 Normal NVM Subsystem Shutdown event: Not Supported 00:30:29.956 Zone Descriptor Change Notices: Not Supported 00:30:29.956 Discovery Log Change Notices: Not Supported 00:30:29.956 Controller Attributes 00:30:29.956 128-bit Host Identifier: Not Supported 00:30:29.956 Non-Operational Permissive Mode: Not Supported 00:30:29.956 NVM Sets: Not Supported 00:30:29.956 Read Recovery Levels: Not Supported 00:30:29.956 Endurance Groups: Not Supported 00:30:29.956 Predictable Latency Mode: Not Supported 00:30:29.956 Traffic Based Keep ALive: Not Supported 00:30:29.956 Namespace Granularity: Not Supported 00:30:29.956 SQ Associations: Not Supported 00:30:29.956 UUID List: Not Supported 00:30:29.956 Multi-Domain Subsystem: Not Supported 00:30:29.956 Fixed Capacity Management: Not Supported 00:30:29.956 Variable Capacity Management: Not Supported 00:30:29.956 Delete Endurance Group: Not Supported 00:30:29.956 Delete NVM Set: Not Supported 00:30:29.956 Extended LBA Formats Supported: Supported 00:30:29.956 Flexible Data Placement Supported: Not Supported 00:30:29.956 00:30:29.956 Controller Memory Buffer Support 00:30:29.956 ================================ 00:30:29.956 Supported: No 00:30:29.956 00:30:29.956 Persistent Memory Region Support 00:30:29.956 ================================ 00:30:29.956 Supported: No 00:30:29.956 00:30:29.956 Admin Command Set Attributes 00:30:29.956 ============================ 00:30:29.956 Security Send/Receive: Not Supported 00:30:29.956 Format NVM: Supported 00:30:29.956 Firmware Activate/Download: Not Supported 00:30:29.956 Namespace Management: Supported 00:30:29.956 Device Self-Test: Not Supported 00:30:29.956 Directives: Supported 00:30:29.956 NVMe-MI: Not Supported 00:30:29.956 Virtualization Management: Not Supported 00:30:29.956 Doorbell Buffer Config: Supported 00:30:29.956 Get LBA Status Capability: Not Supported 00:30:29.956 Command & Feature Lockdown Capability: Not Supported 00:30:29.956 Abort Command Limit: 4 00:30:29.956 Async Event Request Limit: 4 00:30:29.956 Number of Firmware Slots: N/A 00:30:29.956 Firmware Slot 1 Read-Only: N/A 00:30:29.956 Firmware Activation Without Reset: N/A 00:30:29.956 Multiple Update Detection Support: N/A 00:30:29.956 Firmware Update Granularity: No Information Provided 00:30:29.956 Per-Namespace SMART Log: Yes 00:30:29.956 Asymmetric Namespace Access Log Page: Not Supported 
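One more step worth pulling out of the trace: the whole nvme suite only started after the stub primary process published /var/run/spdk_stub0. A sketch of that handshake; the binary path, flags, marker file, sleep interval, and /proc liveness check are taken from the xtrace, while the loop framing is inferred:

    # Launch the SPDK primary-process stub, then wait until it creates
    # /var/run/spdk_stub0 so secondary processes (the tests) can attach.
    stub=/home/vagrant/spdk_repo/spdk/test/app/stub/stub
    "$stub" -s 4096 -i 0 -m 0xE &
    stubpid=$!
    echo "Waiting for stub to ready for secondary processes..."
    while [[ ! -e /var/run/spdk_stub0 ]]; do
        # Bail out if the stub died before creating its marker file.
        [[ -e /proc/$stubpid ]] || { echo "stub exited early" >&2; exit 1; }
        sleep 1s
    done
    echo done.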
00:30:29.956 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:30:29.956 Command Effects Log Page: Supported 00:30:29.956 Get Log Page Extended Data: Supported 00:30:29.956 Telemetry Log Pages: Not Supported 00:30:29.957 Persistent Event Log Pages: Not Supported 00:30:29.957 Supported Log Pages Log Page: May Support 00:30:29.957 Commands Supported & Effects Log Page: Not Supported 00:30:29.957 Feature Identifiers & Effects Log Page:May Support 00:30:29.957 NVMe-MI Commands & Effects Log Page: May Support 00:30:29.957 Data Area 4 for Telemetry Log: Not Supported 00:30:29.957 Error Log Page Entries Supported: 1 00:30:29.957 Keep Alive: Not Supported 00:30:29.957 00:30:29.957 NVM Command Set Attributes 00:30:29.957 ========================== 00:30:29.957 Submission Queue Entry Size 00:30:29.957 Max: 64 00:30:29.957 Min: 64 00:30:29.957 Completion Queue Entry Size 00:30:29.957 Max: 16 00:30:29.957 Min: 16 00:30:29.957 Number of Namespaces: 256 00:30:29.957 Compare Command: Supported 00:30:29.957 Write Uncorrectable Command: Not Supported 00:30:29.957 Dataset Management Command: Supported 00:30:29.957 Write Zeroes Command: Supported 00:30:29.957 Set Features Save Field: Supported 00:30:29.957 Reservations: Not Supported 00:30:29.957 Timestamp: Supported 00:30:29.957 Copy: Supported 00:30:29.957 Volatile Write Cache: Present 00:30:29.957 Atomic Write Unit (Normal): 1 00:30:29.957 Atomic Write Unit (PFail): 1 00:30:29.957 Atomic Compare & Write Unit: 1 00:30:29.957 Fused Compare & Write: Not Supported 00:30:29.957 Scatter-Gather List 00:30:29.957 SGL Command Set: Supported 00:30:29.957 SGL Keyed: Not Supported 00:30:29.957 SGL Bit Bucket Descriptor: Not Supported 00:30:29.957 SGL Metadata Pointer: Not Supported 00:30:29.957 Oversized SGL: Not Supported 00:30:29.957 SGL Metadata Address: Not Supported 00:30:29.957 SGL Offset: Not Supported 00:30:29.957 Transport SGL Data Block: Not Supported 00:30:29.957 Replay Protected Memory Block: Not Supported 00:30:29.957 00:30:29.957 Firmware Slot Information 00:30:29.957 ========================= 00:30:29.957 Active slot: 1 00:30:29.957 Slot 1 Firmware Revision: 1.0 00:30:29.957 00:30:29.957 00:30:29.957 Commands Supported and Effects 00:30:29.957 ============================== 00:30:29.957 Admin Commands 00:30:29.957 -------------- 00:30:29.957 Delete I/O Submission Queue (00h): Supported 00:30:29.957 Create I/O Submission Queue (01h): Supported 00:30:29.957 Get Log Page (02h): Supported 00:30:29.957 Delete I/O Completion Queue (04h): Supported 00:30:29.957 Create I/O Completion Queue (05h): Supported 00:30:29.957 Identify (06h): Supported 00:30:29.957 Abort (08h): Supported 00:30:29.957 Set Features (09h): Supported 00:30:29.957 Get Features (0Ah): Supported 00:30:29.957 Asynchronous Event Request (0Ch): Supported 00:30:29.957 Namespace Attachment (15h): Supported NS-Inventory-Change 00:30:29.957 Directive Send (19h): Supported 00:30:29.957 Directive Receive (1Ah): Supported 00:30:29.957 Virtualization Management (1Ch): Supported 00:30:29.957 Doorbell Buffer Config (7Ch): Supported 00:30:29.957 Format NVM (80h): Supported LBA-Change 00:30:29.957 I/O Commands 00:30:29.957 ------------ 00:30:29.957 Flush (00h): Supported LBA-Change 00:30:29.957 Write (01h): Supported LBA-Change 00:30:29.957 Read (02h): Supported 00:30:29.957 Compare (05h): Supported 00:30:29.957 Write Zeroes (08h): Supported LBA-Change 00:30:29.957 Dataset Management (09h): Supported LBA-Change 00:30:29.957 Unknown (0Ch): Supported 00:30:29.957 Unknown (12h): Supported 00:30:29.957 Copy (19h): 
Supported LBA-Change 00:30:29.957 Unknown (1Dh): Supported LBA-Change 00:30:29.957 00:30:29.957 Error Log 00:30:29.957 ========= 00:30:29.957 00:30:29.957 Arbitration 00:30:29.957 =========== 00:30:29.957 Arbitration Burst: no limit 00:30:29.957 00:30:29.957 Power Management 00:30:29.957 ================ 00:30:29.957 Number of Power States: 1 00:30:29.957 Current Power State: Power State #0 00:30:29.957 Power State #0: 00:30:29.957 Max Power: 25.00 W 00:30:29.957 Non-Operational State: Operational 00:30:29.957 Entry Latency: 16 microseconds 00:30:29.957 Exit Latency: 4 microseconds 00:30:29.957 Relative Read Throughput: 0 00:30:29.957 Relative Read Latency: 0 00:30:29.957 Relative Write Throughput: 0 00:30:29.957 Relative Write Latency: 0 00:30:29.957 Idle Power: Not Reported 00:30:29.957 Active Power: Not Reported 00:30:29.957 Non-Operational Permissive Mode: Not Supported 00:30:29.957 00:30:29.957 Health Information 00:30:29.957 ================== 00:30:29.957 Critical Warnings: 00:30:29.957 Available Spare Space: OK 00:30:29.957 Temperature: OK 00:30:29.957 Device Reliability: OK 00:30:29.957 Read Only: No 00:30:29.957 Volatile Memory Backup: OK 00:30:29.957 Current Temperature: 323 Kelvin (50 Celsius) 00:30:29.957 Temperature Threshold: 343 Kelvin (70 Celsius) 00:30:29.957 Available Spare: 0% 00:30:29.957 Available Spare Threshold: 0% 00:30:29.957 Life Percentage Used: 0% 00:30:29.957 Data Units Read: 2293 00:30:29.957 Data Units Written: 2080 00:30:29.957 Host Read Commands: 119314 00:30:29.957 Host Write Commands: 117584 00:30:29.957 Controller Busy Time: 0 minutes 00:30:29.957 Power Cycles: 0 00:30:29.957 Power On Hours: 0 hours 00:30:29.957 Unsafe Shutdowns: 0 00:30:29.957 Unrecoverable Media Errors: 0 00:30:29.957 Lifetime Error Log Entries: 0 00:30:29.957 Warning Temperature Time: 0 minutes 00:30:29.957 Critical Temperature Time: 0 minutes 00:30:29.957 00:30:29.957 Number of Queues 00:30:29.957 ================ 00:30:29.957 Number of I/O Submission Queues: 64 00:30:29.957 Number of I/O Completion Queues: 64 00:30:29.957 00:30:29.957 ZNS Specific Controller Data 00:30:29.957 ============================ 00:30:29.957 Zone Append Size Limit: 0 00:30:29.957 00:30:29.957 00:30:29.957 Active Namespaces 00:30:29.957 ================= 00:30:29.957 Namespace ID:1 00:30:29.957 Error Recovery Timeout: Unlimited 00:30:29.957 Command Set Identifier: NVM (00h) 00:30:29.957 Deallocate: Supported 00:30:29.957 Deallocated/Unwritten Error: Supported 00:30:29.957 Deallocated Read Value: All 0x00 00:30:29.957 Deallocate in Write Zeroes: Not Supported 00:30:29.957 Deallocated Guard Field: 0xFFFF 00:30:29.957 Flush: Supported 00:30:29.957 Reservation: Not Supported 00:30:29.957 Namespace Sharing Capabilities: Private 00:30:29.957 Size (in LBAs): 1048576 (4GiB) 00:30:29.957 Capacity (in LBAs): 1048576 (4GiB) 00:30:29.957 Utilization (in LBAs): 1048576 (4GiB) 00:30:29.957 Thin Provisioning: Not Supported 00:30:29.957 Per-NS Atomic Units: No 00:30:29.957 Maximum Single Source Range Length: 128 00:30:29.957 Maximum Copy Length: 128 00:30:29.957 Maximum Source Range Count: 128 00:30:29.957 NGUID/EUI64 Never Reused: No 00:30:29.957 Namespace Write Protected: No 00:30:29.957 Number of LBA Formats: 8 00:30:29.957 Current LBA Format: LBA Format #04 00:30:29.957 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:29.957 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:29.957 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:29.957 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:29.957 LBA 
Format #04: Data Size: 4096 Metadata Size: 0 00:30:29.957 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:29.957 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:29.957 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:29.957 00:30:29.957 NVM Specific Namespace Data 00:30:29.957 =========================== 00:30:29.957 Logical Block Storage Tag Mask: 0 00:30:29.957 Protection Information Capabilities: 00:30:29.957 16b Guard Protection Information Storage Tag Support: No 00:30:29.957 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:30:29.957 Storage Tag Check Read Support: No 00:30:29.957 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:29.957 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:29.957 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:29.957 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:29.957 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:29.957 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:29.957 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:29.957 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:29.957 Namespace ID:2 00:30:29.957 Error Recovery Timeout: Unlimited 00:30:29.957 Command Set Identifier: NVM (00h) 00:30:29.957 Deallocate: Supported 00:30:29.957 Deallocated/Unwritten Error: Supported 00:30:29.957 Deallocated Read Value: All 0x00 00:30:29.957 Deallocate in Write Zeroes: Not Supported 00:30:29.957 Deallocated Guard Field: 0xFFFF 00:30:29.957 Flush: Supported 00:30:29.957 Reservation: Not Supported 00:30:29.957 Namespace Sharing Capabilities: Private 00:30:29.957 Size (in LBAs): 1048576 (4GiB) 00:30:29.957 Capacity (in LBAs): 1048576 (4GiB) 00:30:29.957 Utilization (in LBAs): 1048576 (4GiB) 00:30:29.957 Thin Provisioning: Not Supported 00:30:29.958 Per-NS Atomic Units: No 00:30:29.958 Maximum Single Source Range Length: 128 00:30:29.958 Maximum Copy Length: 128 00:30:29.958 Maximum Source Range Count: 128 00:30:29.958 NGUID/EUI64 Never Reused: No 00:30:29.958 Namespace Write Protected: No 00:30:29.958 Number of LBA Formats: 8 00:30:29.958 Current LBA Format: LBA Format #04 00:30:29.958 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:29.958 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:29.958 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:29.958 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:29.958 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:29.958 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:29.958 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:29.958 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:29.958 00:30:29.958 NVM Specific Namespace Data 00:30:29.958 =========================== 00:30:29.958 Logical Block Storage Tag Mask: 0 00:30:29.958 Protection Information Capabilities: 00:30:29.958 16b Guard Protection Information Storage Tag Support: No 00:30:29.958 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:30:29.958 Storage Tag Check Read Support: No 00:30:29.958 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:29.958 Extended LBA Format #01: Storage Tag Size: 0 , Protection 
Information Format: 16b Guard PI 00:30:29.958 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:29.958 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:29.958 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:29.958 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:29.958 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:29.958 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:29.958 Namespace ID:3 00:30:29.958 Error Recovery Timeout: Unlimited 00:30:29.958 Command Set Identifier: NVM (00h) 00:30:29.958 Deallocate: Supported 00:30:29.958 Deallocated/Unwritten Error: Supported 00:30:29.958 Deallocated Read Value: All 0x00 00:30:29.958 Deallocate in Write Zeroes: Not Supported 00:30:29.958 Deallocated Guard Field: 0xFFFF 00:30:29.958 Flush: Supported 00:30:29.958 Reservation: Not Supported 00:30:29.958 Namespace Sharing Capabilities: Private 00:30:29.958 Size (in LBAs): 1048576 (4GiB) 00:30:29.958 Capacity (in LBAs): 1048576 (4GiB) 00:30:29.958 Utilization (in LBAs): 1048576 (4GiB) 00:30:29.958 Thin Provisioning: Not Supported 00:30:29.958 Per-NS Atomic Units: No 00:30:29.958 Maximum Single Source Range Length: 128 00:30:29.958 Maximum Copy Length: 128 00:30:29.958 Maximum Source Range Count: 128 00:30:29.958 NGUID/EUI64 Never Reused: No 00:30:29.958 Namespace Write Protected: No 00:30:29.958 Number of LBA Formats: 8 00:30:29.958 Current LBA Format: LBA Format #04 00:30:29.958 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:29.958 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:29.958 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:29.958 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:29.958 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:29.958 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:29.958 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:29.958 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:29.958 00:30:29.958 NVM Specific Namespace Data 00:30:29.958 =========================== 00:30:29.958 Logical Block Storage Tag Mask: 0 00:30:29.958 Protection Information Capabilities: 00:30:29.958 16b Guard Protection Information Storage Tag Support: No 00:30:29.958 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:30:29.958 Storage Tag Check Read Support: No 00:30:29.958 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:29.958 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:29.958 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:29.958 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:29.958 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:29.958 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:29.958 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:29.958 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:29.958 15:55:51 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:30:29.958 15:55:51 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:30:30.217 ===================================================== 00:30:30.217 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:30:30.217 ===================================================== 00:30:30.217 Controller Capabilities/Features 00:30:30.217 ================================ 00:30:30.217 Vendor ID: 1b36 00:30:30.217 Subsystem Vendor ID: 1af4 00:30:30.217 Serial Number: 12340 00:30:30.217 Model Number: QEMU NVMe Ctrl 00:30:30.217 Firmware Version: 8.0.0 00:30:30.217 Recommended Arb Burst: 6 00:30:30.217 IEEE OUI Identifier: 00 54 52 00:30:30.217 Multi-path I/O 00:30:30.217 May have multiple subsystem ports: No 00:30:30.217 May have multiple controllers: No 00:30:30.217 Associated with SR-IOV VF: No 00:30:30.217 Max Data Transfer Size: 524288 00:30:30.217 Max Number of Namespaces: 256 00:30:30.217 Max Number of I/O Queues: 64 00:30:30.217 NVMe Specification Version (VS): 1.4 00:30:30.217 NVMe Specification Version (Identify): 1.4 00:30:30.217 Maximum Queue Entries: 2048 00:30:30.217 Contiguous Queues Required: Yes 00:30:30.217 Arbitration Mechanisms Supported 00:30:30.217 Weighted Round Robin: Not Supported 00:30:30.217 Vendor Specific: Not Supported 00:30:30.217 Reset Timeout: 7500 ms 00:30:30.217 Doorbell Stride: 4 bytes 00:30:30.217 NVM Subsystem Reset: Not Supported 00:30:30.217 Command Sets Supported 00:30:30.217 NVM Command Set: Supported 00:30:30.217 Boot Partition: Not Supported 00:30:30.217 Memory Page Size Minimum: 4096 bytes 00:30:30.217 Memory Page Size Maximum: 65536 bytes 00:30:30.217 Persistent Memory Region: Not Supported 00:30:30.217 Optional Asynchronous Events Supported 00:30:30.217 Namespace Attribute Notices: Supported 00:30:30.217 Firmware Activation Notices: Not Supported 00:30:30.217 ANA Change Notices: Not Supported 00:30:30.217 PLE Aggregate Log Change Notices: Not Supported 00:30:30.217 LBA Status Info Alert Notices: Not Supported 00:30:30.217 EGE Aggregate Log Change Notices: Not Supported 00:30:30.217 Normal NVM Subsystem Shutdown event: Not Supported 00:30:30.217 Zone Descriptor Change Notices: Not Supported 00:30:30.217 Discovery Log Change Notices: Not Supported 00:30:30.217 Controller Attributes 00:30:30.217 128-bit Host Identifier: Not Supported 00:30:30.217 Non-Operational Permissive Mode: Not Supported 00:30:30.217 NVM Sets: Not Supported 00:30:30.217 Read Recovery Levels: Not Supported 00:30:30.217 Endurance Groups: Not Supported 00:30:30.217 Predictable Latency Mode: Not Supported 00:30:30.217 Traffic Based Keep ALive: Not Supported 00:30:30.217 Namespace Granularity: Not Supported 00:30:30.217 SQ Associations: Not Supported 00:30:30.217 UUID List: Not Supported 00:30:30.217 Multi-Domain Subsystem: Not Supported 00:30:30.217 Fixed Capacity Management: Not Supported 00:30:30.217 Variable Capacity Management: Not Supported 00:30:30.217 Delete Endurance Group: Not Supported 00:30:30.218 Delete NVM Set: Not Supported 00:30:30.218 Extended LBA Formats Supported: Supported 00:30:30.218 Flexible Data Placement Supported: Not Supported 00:30:30.218 00:30:30.218 Controller Memory Buffer Support 00:30:30.218 ================================ 00:30:30.218 Supported: No 00:30:30.218 00:30:30.218 Persistent Memory Region Support 00:30:30.218 ================================ 00:30:30.218 Supported: No 00:30:30.218 00:30:30.218 Admin Command Set Attributes 00:30:30.218 ============================ 00:30:30.218 Security Send/Receive: Not Supported 00:30:30.218 
Format NVM: Supported 00:30:30.218 Firmware Activate/Download: Not Supported 00:30:30.218 Namespace Management: Supported 00:30:30.218 Device Self-Test: Not Supported 00:30:30.218 Directives: Supported 00:30:30.218 NVMe-MI: Not Supported 00:30:30.218 Virtualization Management: Not Supported 00:30:30.218 Doorbell Buffer Config: Supported 00:30:30.218 Get LBA Status Capability: Not Supported 00:30:30.218 Command & Feature Lockdown Capability: Not Supported 00:30:30.218 Abort Command Limit: 4 00:30:30.218 Async Event Request Limit: 4 00:30:30.218 Number of Firmware Slots: N/A 00:30:30.218 Firmware Slot 1 Read-Only: N/A 00:30:30.218 Firmware Activation Without Reset: N/A 00:30:30.218 Multiple Update Detection Support: N/A 00:30:30.218 Firmware Update Granularity: No Information Provided 00:30:30.218 Per-Namespace SMART Log: Yes 00:30:30.218 Asymmetric Namespace Access Log Page: Not Supported 00:30:30.218 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:30:30.218 Command Effects Log Page: Supported 00:30:30.218 Get Log Page Extended Data: Supported 00:30:30.218 Telemetry Log Pages: Not Supported 00:30:30.218 Persistent Event Log Pages: Not Supported 00:30:30.218 Supported Log Pages Log Page: May Support 00:30:30.218 Commands Supported & Effects Log Page: Not Supported 00:30:30.218 Feature Identifiers & Effects Log Page:May Support 00:30:30.218 NVMe-MI Commands & Effects Log Page: May Support 00:30:30.218 Data Area 4 for Telemetry Log: Not Supported 00:30:30.218 Error Log Page Entries Supported: 1 00:30:30.218 Keep Alive: Not Supported 00:30:30.218 00:30:30.218 NVM Command Set Attributes 00:30:30.218 ========================== 00:30:30.218 Submission Queue Entry Size 00:30:30.218 Max: 64 00:30:30.218 Min: 64 00:30:30.218 Completion Queue Entry Size 00:30:30.218 Max: 16 00:30:30.218 Min: 16 00:30:30.218 Number of Namespaces: 256 00:30:30.218 Compare Command: Supported 00:30:30.218 Write Uncorrectable Command: Not Supported 00:30:30.218 Dataset Management Command: Supported 00:30:30.218 Write Zeroes Command: Supported 00:30:30.218 Set Features Save Field: Supported 00:30:30.218 Reservations: Not Supported 00:30:30.218 Timestamp: Supported 00:30:30.218 Copy: Supported 00:30:30.218 Volatile Write Cache: Present 00:30:30.218 Atomic Write Unit (Normal): 1 00:30:30.218 Atomic Write Unit (PFail): 1 00:30:30.218 Atomic Compare & Write Unit: 1 00:30:30.218 Fused Compare & Write: Not Supported 00:30:30.218 Scatter-Gather List 00:30:30.218 SGL Command Set: Supported 00:30:30.218 SGL Keyed: Not Supported 00:30:30.218 SGL Bit Bucket Descriptor: Not Supported 00:30:30.218 SGL Metadata Pointer: Not Supported 00:30:30.218 Oversized SGL: Not Supported 00:30:30.218 SGL Metadata Address: Not Supported 00:30:30.218 SGL Offset: Not Supported 00:30:30.218 Transport SGL Data Block: Not Supported 00:30:30.218 Replay Protected Memory Block: Not Supported 00:30:30.218 00:30:30.218 Firmware Slot Information 00:30:30.218 ========================= 00:30:30.218 Active slot: 1 00:30:30.218 Slot 1 Firmware Revision: 1.0 00:30:30.218 00:30:30.218 00:30:30.218 Commands Supported and Effects 00:30:30.218 ============================== 00:30:30.218 Admin Commands 00:30:30.218 -------------- 00:30:30.218 Delete I/O Submission Queue (00h): Supported 00:30:30.218 Create I/O Submission Queue (01h): Supported 00:30:30.218 Get Log Page (02h): Supported 00:30:30.218 Delete I/O Completion Queue (04h): Supported 00:30:30.218 Create I/O Completion Queue (05h): Supported 00:30:30.218 Identify (06h): Supported 00:30:30.218 Abort (08h): Supported 
00:30:30.218 Set Features (09h): Supported 00:30:30.218 Get Features (0Ah): Supported 00:30:30.218 Asynchronous Event Request (0Ch): Supported 00:30:30.218 Namespace Attachment (15h): Supported NS-Inventory-Change 00:30:30.218 Directive Send (19h): Supported 00:30:30.218 Directive Receive (1Ah): Supported 00:30:30.218 Virtualization Management (1Ch): Supported 00:30:30.218 Doorbell Buffer Config (7Ch): Supported 00:30:30.218 Format NVM (80h): Supported LBA-Change 00:30:30.218 I/O Commands 00:30:30.218 ------------ 00:30:30.218 Flush (00h): Supported LBA-Change 00:30:30.218 Write (01h): Supported LBA-Change 00:30:30.218 Read (02h): Supported 00:30:30.218 Compare (05h): Supported 00:30:30.218 Write Zeroes (08h): Supported LBA-Change 00:30:30.218 Dataset Management (09h): Supported LBA-Change 00:30:30.218 Unknown (0Ch): Supported 00:30:30.218 Unknown (12h): Supported 00:30:30.218 Copy (19h): Supported LBA-Change 00:30:30.218 Unknown (1Dh): Supported LBA-Change 00:30:30.218 00:30:30.218 Error Log 00:30:30.218 ========= 00:30:30.218 00:30:30.218 Arbitration 00:30:30.218 =========== 00:30:30.218 Arbitration Burst: no limit 00:30:30.218 00:30:30.218 Power Management 00:30:30.218 ================ 00:30:30.218 Number of Power States: 1 00:30:30.218 Current Power State: Power State #0 00:30:30.218 Power State #0: 00:30:30.218 Max Power: 25.00 W 00:30:30.218 Non-Operational State: Operational 00:30:30.218 Entry Latency: 16 microseconds 00:30:30.218 Exit Latency: 4 microseconds 00:30:30.218 Relative Read Throughput: 0 00:30:30.218 Relative Read Latency: 0 00:30:30.218 Relative Write Throughput: 0 00:30:30.218 Relative Write Latency: 0 00:30:30.218 Idle Power: Not Reported 00:30:30.218 Active Power: Not Reported 00:30:30.218 Non-Operational Permissive Mode: Not Supported 00:30:30.218 00:30:30.218 Health Information 00:30:30.218 ================== 00:30:30.218 Critical Warnings: 00:30:30.218 Available Spare Space: OK 00:30:30.218 Temperature: OK 00:30:30.218 Device Reliability: OK 00:30:30.218 Read Only: No 00:30:30.218 Volatile Memory Backup: OK 00:30:30.218 Current Temperature: 323 Kelvin (50 Celsius) 00:30:30.218 Temperature Threshold: 343 Kelvin (70 Celsius) 00:30:30.218 Available Spare: 0% 00:30:30.218 Available Spare Threshold: 0% 00:30:30.218 Life Percentage Used: 0% 00:30:30.218 Data Units Read: 699 00:30:30.218 Data Units Written: 627 00:30:30.218 Host Read Commands: 38896 00:30:30.218 Host Write Commands: 38682 00:30:30.218 Controller Busy Time: 0 minutes 00:30:30.218 Power Cycles: 0 00:30:30.218 Power On Hours: 0 hours 00:30:30.218 Unsafe Shutdowns: 0 00:30:30.218 Unrecoverable Media Errors: 0 00:30:30.218 Lifetime Error Log Entries: 0 00:30:30.218 Warning Temperature Time: 0 minutes 00:30:30.218 Critical Temperature Time: 0 minutes 00:30:30.218 00:30:30.218 Number of Queues 00:30:30.218 ================ 00:30:30.218 Number of I/O Submission Queues: 64 00:30:30.218 Number of I/O Completion Queues: 64 00:30:30.218 00:30:30.218 ZNS Specific Controller Data 00:30:30.218 ============================ 00:30:30.218 Zone Append Size Limit: 0 00:30:30.218 00:30:30.218 00:30:30.218 Active Namespaces 00:30:30.218 ================= 00:30:30.218 Namespace ID:1 00:30:30.218 Error Recovery Timeout: Unlimited 00:30:30.218 Command Set Identifier: NVM (00h) 00:30:30.218 Deallocate: Supported 00:30:30.218 Deallocated/Unwritten Error: Supported 00:30:30.218 Deallocated Read Value: All 0x00 00:30:30.218 Deallocate in Write Zeroes: Not Supported 00:30:30.218 Deallocated Guard Field: 0xFFFF 00:30:30.218 Flush: 
Supported 00:30:30.218 Reservation: Not Supported 00:30:30.218 Metadata Transferred as: Separate Metadata Buffer 00:30:30.218 Namespace Sharing Capabilities: Private 00:30:30.218 Size (in LBAs): 1548666 (5GiB) 00:30:30.218 Capacity (in LBAs): 1548666 (5GiB) 00:30:30.218 Utilization (in LBAs): 1548666 (5GiB) 00:30:30.218 Thin Provisioning: Not Supported 00:30:30.218 Per-NS Atomic Units: No 00:30:30.218 Maximum Single Source Range Length: 128 00:30:30.218 Maximum Copy Length: 128 00:30:30.218 Maximum Source Range Count: 128 00:30:30.218 NGUID/EUI64 Never Reused: No 00:30:30.218 Namespace Write Protected: No 00:30:30.218 Number of LBA Formats: 8 00:30:30.218 Current LBA Format: LBA Format #07 00:30:30.218 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:30.218 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:30.218 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:30.218 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:30.218 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:30.218 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:30.219 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:30.219 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:30.219 00:30:30.219 NVM Specific Namespace Data 00:30:30.219 =========================== 00:30:30.219 Logical Block Storage Tag Mask: 0 00:30:30.219 Protection Information Capabilities: 00:30:30.219 16b Guard Protection Information Storage Tag Support: No 00:30:30.219 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:30:30.219 Storage Tag Check Read Support: No 00:30:30.219 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:30.219 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:30.219 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:30.219 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:30.219 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:30.219 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:30.219 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:30.219 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:30.219 15:55:51 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:30:30.219 15:55:51 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:30:30.503 ===================================================== 00:30:30.503 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:30:30.503 ===================================================== 00:30:30.503 Controller Capabilities/Features 00:30:30.503 ================================ 00:30:30.503 Vendor ID: 1b36 00:30:30.503 Subsystem Vendor ID: 1af4 00:30:30.503 Serial Number: 12341 00:30:30.503 Model Number: QEMU NVMe Ctrl 00:30:30.503 Firmware Version: 8.0.0 00:30:30.503 Recommended Arb Burst: 6 00:30:30.503 IEEE OUI Identifier: 00 54 52 00:30:30.503 Multi-path I/O 00:30:30.503 May have multiple subsystem ports: No 00:30:30.503 May have multiple controllers: No 00:30:30.503 Associated with SR-IOV VF: No 00:30:30.503 Max Data Transfer Size: 524288 00:30:30.503 Max Number of Namespaces: 256 00:30:30.503 Max Number of I/O Queues: 64 00:30:30.503 NVMe 
Specification Version (VS): 1.4 00:30:30.503 NVMe Specification Version (Identify): 1.4 00:30:30.503 Maximum Queue Entries: 2048 00:30:30.503 Contiguous Queues Required: Yes 00:30:30.503 Arbitration Mechanisms Supported 00:30:30.503 Weighted Round Robin: Not Supported 00:30:30.503 Vendor Specific: Not Supported 00:30:30.503 Reset Timeout: 7500 ms 00:30:30.503 Doorbell Stride: 4 bytes 00:30:30.503 NVM Subsystem Reset: Not Supported 00:30:30.503 Command Sets Supported 00:30:30.503 NVM Command Set: Supported 00:30:30.503 Boot Partition: Not Supported 00:30:30.503 Memory Page Size Minimum: 4096 bytes 00:30:30.503 Memory Page Size Maximum: 65536 bytes 00:30:30.503 Persistent Memory Region: Not Supported 00:30:30.503 Optional Asynchronous Events Supported 00:30:30.503 Namespace Attribute Notices: Supported 00:30:30.503 Firmware Activation Notices: Not Supported 00:30:30.503 ANA Change Notices: Not Supported 00:30:30.503 PLE Aggregate Log Change Notices: Not Supported 00:30:30.503 LBA Status Info Alert Notices: Not Supported 00:30:30.503 EGE Aggregate Log Change Notices: Not Supported 00:30:30.503 Normal NVM Subsystem Shutdown event: Not Supported 00:30:30.503 Zone Descriptor Change Notices: Not Supported 00:30:30.503 Discovery Log Change Notices: Not Supported 00:30:30.503 Controller Attributes 00:30:30.503 128-bit Host Identifier: Not Supported 00:30:30.503 Non-Operational Permissive Mode: Not Supported 00:30:30.503 NVM Sets: Not Supported 00:30:30.503 Read Recovery Levels: Not Supported 00:30:30.503 Endurance Groups: Not Supported 00:30:30.503 Predictable Latency Mode: Not Supported 00:30:30.503 Traffic Based Keep ALive: Not Supported 00:30:30.503 Namespace Granularity: Not Supported 00:30:30.503 SQ Associations: Not Supported 00:30:30.503 UUID List: Not Supported 00:30:30.503 Multi-Domain Subsystem: Not Supported 00:30:30.503 Fixed Capacity Management: Not Supported 00:30:30.503 Variable Capacity Management: Not Supported 00:30:30.503 Delete Endurance Group: Not Supported 00:30:30.503 Delete NVM Set: Not Supported 00:30:30.503 Extended LBA Formats Supported: Supported 00:30:30.503 Flexible Data Placement Supported: Not Supported 00:30:30.503 00:30:30.503 Controller Memory Buffer Support 00:30:30.503 ================================ 00:30:30.503 Supported: No 00:30:30.503 00:30:30.503 Persistent Memory Region Support 00:30:30.503 ================================ 00:30:30.503 Supported: No 00:30:30.503 00:30:30.503 Admin Command Set Attributes 00:30:30.503 ============================ 00:30:30.503 Security Send/Receive: Not Supported 00:30:30.503 Format NVM: Supported 00:30:30.503 Firmware Activate/Download: Not Supported 00:30:30.503 Namespace Management: Supported 00:30:30.503 Device Self-Test: Not Supported 00:30:30.503 Directives: Supported 00:30:30.503 NVMe-MI: Not Supported 00:30:30.503 Virtualization Management: Not Supported 00:30:30.503 Doorbell Buffer Config: Supported 00:30:30.503 Get LBA Status Capability: Not Supported 00:30:30.503 Command & Feature Lockdown Capability: Not Supported 00:30:30.503 Abort Command Limit: 4 00:30:30.503 Async Event Request Limit: 4 00:30:30.503 Number of Firmware Slots: N/A 00:30:30.503 Firmware Slot 1 Read-Only: N/A 00:30:30.503 Firmware Activation Without Reset: N/A 00:30:30.503 Multiple Update Detection Support: N/A 00:30:30.503 Firmware Update Granularity: No Information Provided 00:30:30.503 Per-Namespace SMART Log: Yes 00:30:30.504 Asymmetric Namespace Access Log Page: Not Supported 00:30:30.504 Subsystem NQN: nqn.2019-08.org.qemu:12341 
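The health sections in these dumps print temperatures as, e.g., "Current Temperature: 323 Kelvin (50 Celsius)": NVMe SMART data carries the composite temperature in Kelvin, and the Celsius figure shown is derived by subtracting 273. The printed pairs can be checked with shell arithmetic:

  echo $(( 323 - 273 ))   # 50 -> matches the reported current temperature
  echo $(( 343 - 273 ))   # 70 -> matches the reported threshold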
00:30:30.504 Command Effects Log Page: Supported 00:30:30.504 Get Log Page Extended Data: Supported 00:30:30.504 Telemetry Log Pages: Not Supported 00:30:30.504 Persistent Event Log Pages: Not Supported 00:30:30.504 Supported Log Pages Log Page: May Support 00:30:30.504 Commands Supported & Effects Log Page: Not Supported 00:30:30.504 Feature Identifiers & Effects Log Page:May Support 00:30:30.504 NVMe-MI Commands & Effects Log Page: May Support 00:30:30.504 Data Area 4 for Telemetry Log: Not Supported 00:30:30.504 Error Log Page Entries Supported: 1 00:30:30.504 Keep Alive: Not Supported 00:30:30.504 00:30:30.504 NVM Command Set Attributes 00:30:30.504 ========================== 00:30:30.504 Submission Queue Entry Size 00:30:30.504 Max: 64 00:30:30.504 Min: 64 00:30:30.504 Completion Queue Entry Size 00:30:30.504 Max: 16 00:30:30.504 Min: 16 00:30:30.504 Number of Namespaces: 256 00:30:30.504 Compare Command: Supported 00:30:30.504 Write Uncorrectable Command: Not Supported 00:30:30.504 Dataset Management Command: Supported 00:30:30.504 Write Zeroes Command: Supported 00:30:30.504 Set Features Save Field: Supported 00:30:30.504 Reservations: Not Supported 00:30:30.504 Timestamp: Supported 00:30:30.504 Copy: Supported 00:30:30.504 Volatile Write Cache: Present 00:30:30.504 Atomic Write Unit (Normal): 1 00:30:30.504 Atomic Write Unit (PFail): 1 00:30:30.504 Atomic Compare & Write Unit: 1 00:30:30.504 Fused Compare & Write: Not Supported 00:30:30.504 Scatter-Gather List 00:30:30.504 SGL Command Set: Supported 00:30:30.504 SGL Keyed: Not Supported 00:30:30.504 SGL Bit Bucket Descriptor: Not Supported 00:30:30.504 SGL Metadata Pointer: Not Supported 00:30:30.504 Oversized SGL: Not Supported 00:30:30.504 SGL Metadata Address: Not Supported 00:30:30.504 SGL Offset: Not Supported 00:30:30.504 Transport SGL Data Block: Not Supported 00:30:30.504 Replay Protected Memory Block: Not Supported 00:30:30.504 00:30:30.504 Firmware Slot Information 00:30:30.504 ========================= 00:30:30.504 Active slot: 1 00:30:30.504 Slot 1 Firmware Revision: 1.0 00:30:30.504 00:30:30.504 00:30:30.504 Commands Supported and Effects 00:30:30.504 ============================== 00:30:30.504 Admin Commands 00:30:30.504 -------------- 00:30:30.504 Delete I/O Submission Queue (00h): Supported 00:30:30.504 Create I/O Submission Queue (01h): Supported 00:30:30.504 Get Log Page (02h): Supported 00:30:30.504 Delete I/O Completion Queue (04h): Supported 00:30:30.504 Create I/O Completion Queue (05h): Supported 00:30:30.504 Identify (06h): Supported 00:30:30.504 Abort (08h): Supported 00:30:30.504 Set Features (09h): Supported 00:30:30.504 Get Features (0Ah): Supported 00:30:30.504 Asynchronous Event Request (0Ch): Supported 00:30:30.504 Namespace Attachment (15h): Supported NS-Inventory-Change 00:30:30.504 Directive Send (19h): Supported 00:30:30.504 Directive Receive (1Ah): Supported 00:30:30.504 Virtualization Management (1Ch): Supported 00:30:30.504 Doorbell Buffer Config (7Ch): Supported 00:30:30.504 Format NVM (80h): Supported LBA-Change 00:30:30.504 I/O Commands 00:30:30.504 ------------ 00:30:30.504 Flush (00h): Supported LBA-Change 00:30:30.504 Write (01h): Supported LBA-Change 00:30:30.504 Read (02h): Supported 00:30:30.504 Compare (05h): Supported 00:30:30.504 Write Zeroes (08h): Supported LBA-Change 00:30:30.504 Dataset Management (09h): Supported LBA-Change 00:30:30.504 Unknown (0Ch): Supported 00:30:30.504 Unknown (12h): Supported 00:30:30.504 Copy (19h): Supported LBA-Change 00:30:30.504 Unknown (1Dh): 
Supported LBA-Change 00:30:30.504 00:30:30.504 Error Log 00:30:30.504 ========= 00:30:30.504 00:30:30.504 Arbitration 00:30:30.504 =========== 00:30:30.504 Arbitration Burst: no limit 00:30:30.504 00:30:30.504 Power Management 00:30:30.504 ================ 00:30:30.504 Number of Power States: 1 00:30:30.504 Current Power State: Power State #0 00:30:30.504 Power State #0: 00:30:30.504 Max Power: 25.00 W 00:30:30.504 Non-Operational State: Operational 00:30:30.504 Entry Latency: 16 microseconds 00:30:30.504 Exit Latency: 4 microseconds 00:30:30.504 Relative Read Throughput: 0 00:30:30.504 Relative Read Latency: 0 00:30:30.504 Relative Write Throughput: 0 00:30:30.504 Relative Write Latency: 0 00:30:30.504 Idle Power: Not Reported 00:30:30.504 Active Power: Not Reported 00:30:30.504 Non-Operational Permissive Mode: Not Supported 00:30:30.504 00:30:30.504 Health Information 00:30:30.504 ================== 00:30:30.504 Critical Warnings: 00:30:30.504 Available Spare Space: OK 00:30:30.504 Temperature: OK 00:30:30.504 Device Reliability: OK 00:30:30.504 Read Only: No 00:30:30.504 Volatile Memory Backup: OK 00:30:30.504 Current Temperature: 323 Kelvin (50 Celsius) 00:30:30.504 Temperature Threshold: 343 Kelvin (70 Celsius) 00:30:30.504 Available Spare: 0% 00:30:30.504 Available Spare Threshold: 0% 00:30:30.504 Life Percentage Used: 0% 00:30:30.504 Data Units Read: 1068 00:30:30.504 Data Units Written: 935 00:30:30.504 Host Read Commands: 57362 00:30:30.504 Host Write Commands: 56156 00:30:30.504 Controller Busy Time: 0 minutes 00:30:30.504 Power Cycles: 0 00:30:30.504 Power On Hours: 0 hours 00:30:30.504 Unsafe Shutdowns: 0 00:30:30.505 Unrecoverable Media Errors: 0 00:30:30.505 Lifetime Error Log Entries: 0 00:30:30.505 Warning Temperature Time: 0 minutes 00:30:30.505 Critical Temperature Time: 0 minutes 00:30:30.505 00:30:30.505 Number of Queues 00:30:30.505 ================ 00:30:30.505 Number of I/O Submission Queues: 64 00:30:30.505 Number of I/O Completion Queues: 64 00:30:30.505 00:30:30.505 ZNS Specific Controller Data 00:30:30.505 ============================ 00:30:30.505 Zone Append Size Limit: 0 00:30:30.505 00:30:30.505 00:30:30.505 Active Namespaces 00:30:30.505 ================= 00:30:30.505 Namespace ID:1 00:30:30.505 Error Recovery Timeout: Unlimited 00:30:30.505 Command Set Identifier: NVM (00h) 00:30:30.505 Deallocate: Supported 00:30:30.505 Deallocated/Unwritten Error: Supported 00:30:30.505 Deallocated Read Value: All 0x00 00:30:30.505 Deallocate in Write Zeroes: Not Supported 00:30:30.505 Deallocated Guard Field: 0xFFFF 00:30:30.505 Flush: Supported 00:30:30.505 Reservation: Not Supported 00:30:30.505 Namespace Sharing Capabilities: Private 00:30:30.505 Size (in LBAs): 1310720 (5GiB) 00:30:30.505 Capacity (in LBAs): 1310720 (5GiB) 00:30:30.505 Utilization (in LBAs): 1310720 (5GiB) 00:30:30.505 Thin Provisioning: Not Supported 00:30:30.505 Per-NS Atomic Units: No 00:30:30.505 Maximum Single Source Range Length: 128 00:30:30.505 Maximum Copy Length: 128 00:30:30.505 Maximum Source Range Count: 128 00:30:30.505 NGUID/EUI64 Never Reused: No 00:30:30.505 Namespace Write Protected: No 00:30:30.505 Number of LBA Formats: 8 00:30:30.505 Current LBA Format: LBA Format #04 00:30:30.505 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:30.505 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:30.505 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:30.505 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:30.505 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:30:30.505 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:30.505 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:30.505 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:30.505 00:30:30.505 NVM Specific Namespace Data 00:30:30.505 =========================== 00:30:30.505 Logical Block Storage Tag Mask: 0 00:30:30.505 Protection Information Capabilities: 00:30:30.505 16b Guard Protection Information Storage Tag Support: No 00:30:30.505 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:30:30.505 Storage Tag Check Read Support: No 00:30:30.505 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:30.505 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:30.505 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:30.505 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:30.505 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:30.505 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:30.505 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:30.505 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:30.505 15:55:51 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:30:30.505 15:55:51 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:30:30.765 ===================================================== 00:30:30.765 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:30:30.765 ===================================================== 00:30:30.765 Controller Capabilities/Features 00:30:30.765 ================================ 00:30:30.765 Vendor ID: 1b36 00:30:30.765 Subsystem Vendor ID: 1af4 00:30:30.765 Serial Number: 12342 00:30:30.765 Model Number: QEMU NVMe Ctrl 00:30:30.765 Firmware Version: 8.0.0 00:30:30.765 Recommended Arb Burst: 6 00:30:30.765 IEEE OUI Identifier: 00 54 52 00:30:30.765 Multi-path I/O 00:30:30.765 May have multiple subsystem ports: No 00:30:30.765 May have multiple controllers: No 00:30:30.765 Associated with SR-IOV VF: No 00:30:30.765 Max Data Transfer Size: 524288 00:30:30.765 Max Number of Namespaces: 256 00:30:30.765 Max Number of I/O Queues: 64 00:30:30.765 NVMe Specification Version (VS): 1.4 00:30:30.765 NVMe Specification Version (Identify): 1.4 00:30:30.765 Maximum Queue Entries: 2048 00:30:30.765 Contiguous Queues Required: Yes 00:30:30.766 Arbitration Mechanisms Supported 00:30:30.766 Weighted Round Robin: Not Supported 00:30:30.766 Vendor Specific: Not Supported 00:30:30.766 Reset Timeout: 7500 ms 00:30:30.766 Doorbell Stride: 4 bytes 00:30:30.766 NVM Subsystem Reset: Not Supported 00:30:30.766 Command Sets Supported 00:30:30.766 NVM Command Set: Supported 00:30:30.766 Boot Partition: Not Supported 00:30:30.766 Memory Page Size Minimum: 4096 bytes 00:30:30.766 Memory Page Size Maximum: 65536 bytes 00:30:30.766 Persistent Memory Region: Not Supported 00:30:30.766 Optional Asynchronous Events Supported 00:30:30.766 Namespace Attribute Notices: Supported 00:30:30.766 Firmware Activation Notices: Not Supported 00:30:30.766 ANA Change Notices: Not Supported 00:30:30.766 PLE Aggregate Log Change Notices: Not Supported 00:30:30.766 LBA Status Info Alert Notices: 
Not Supported 00:30:30.766 EGE Aggregate Log Change Notices: Not Supported 00:30:30.766 Normal NVM Subsystem Shutdown event: Not Supported 00:30:30.766 Zone Descriptor Change Notices: Not Supported 00:30:30.766 Discovery Log Change Notices: Not Supported 00:30:30.766 Controller Attributes 00:30:30.766 128-bit Host Identifier: Not Supported 00:30:30.766 Non-Operational Permissive Mode: Not Supported 00:30:30.766 NVM Sets: Not Supported 00:30:30.766 Read Recovery Levels: Not Supported 00:30:30.766 Endurance Groups: Not Supported 00:30:30.766 Predictable Latency Mode: Not Supported 00:30:30.766 Traffic Based Keep ALive: Not Supported 00:30:30.766 Namespace Granularity: Not Supported 00:30:30.766 SQ Associations: Not Supported 00:30:30.766 UUID List: Not Supported 00:30:30.766 Multi-Domain Subsystem: Not Supported 00:30:30.766 Fixed Capacity Management: Not Supported 00:30:30.766 Variable Capacity Management: Not Supported 00:30:30.766 Delete Endurance Group: Not Supported 00:30:30.766 Delete NVM Set: Not Supported 00:30:30.766 Extended LBA Formats Supported: Supported 00:30:30.766 Flexible Data Placement Supported: Not Supported 00:30:30.766 00:30:30.766 Controller Memory Buffer Support 00:30:30.766 ================================ 00:30:30.766 Supported: No 00:30:30.766 00:30:30.766 Persistent Memory Region Support 00:30:30.766 ================================ 00:30:30.766 Supported: No 00:30:30.766 00:30:30.766 Admin Command Set Attributes 00:30:30.766 ============================ 00:30:30.766 Security Send/Receive: Not Supported 00:30:30.766 Format NVM: Supported 00:30:30.766 Firmware Activate/Download: Not Supported 00:30:30.766 Namespace Management: Supported 00:30:30.766 Device Self-Test: Not Supported 00:30:30.766 Directives: Supported 00:30:30.766 NVMe-MI: Not Supported 00:30:30.766 Virtualization Management: Not Supported 00:30:30.766 Doorbell Buffer Config: Supported 00:30:30.766 Get LBA Status Capability: Not Supported 00:30:30.766 Command & Feature Lockdown Capability: Not Supported 00:30:30.766 Abort Command Limit: 4 00:30:30.766 Async Event Request Limit: 4 00:30:30.766 Number of Firmware Slots: N/A 00:30:30.766 Firmware Slot 1 Read-Only: N/A 00:30:30.766 Firmware Activation Without Reset: N/A 00:30:30.766 Multiple Update Detection Support: N/A 00:30:30.766 Firmware Update Granularity: No Information Provided 00:30:30.766 Per-Namespace SMART Log: Yes 00:30:30.766 Asymmetric Namespace Access Log Page: Not Supported 00:30:30.766 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:30:30.766 Command Effects Log Page: Supported 00:30:30.766 Get Log Page Extended Data: Supported 00:30:30.766 Telemetry Log Pages: Not Supported 00:30:30.766 Persistent Event Log Pages: Not Supported 00:30:30.766 Supported Log Pages Log Page: May Support 00:30:30.766 Commands Supported & Effects Log Page: Not Supported 00:30:30.766 Feature Identifiers & Effects Log Page:May Support 00:30:30.766 NVMe-MI Commands & Effects Log Page: May Support 00:30:30.766 Data Area 4 for Telemetry Log: Not Supported 00:30:30.766 Error Log Page Entries Supported: 1 00:30:30.766 Keep Alive: Not Supported 00:30:30.766 00:30:30.766 NVM Command Set Attributes 00:30:30.766 ========================== 00:30:30.766 Submission Queue Entry Size 00:30:30.766 Max: 64 00:30:30.766 Min: 64 00:30:30.766 Completion Queue Entry Size 00:30:30.766 Max: 16 00:30:30.766 Min: 16 00:30:30.766 Number of Namespaces: 256 00:30:30.766 Compare Command: Supported 00:30:30.766 Write Uncorrectable Command: Not Supported 00:30:30.766 Dataset Management Command: 
Supported 00:30:30.766 Write Zeroes Command: Supported 00:30:30.766 Set Features Save Field: Supported 00:30:30.766 Reservations: Not Supported 00:30:30.766 Timestamp: Supported 00:30:30.766 Copy: Supported 00:30:30.766 Volatile Write Cache: Present 00:30:30.766 Atomic Write Unit (Normal): 1 00:30:30.766 Atomic Write Unit (PFail): 1 00:30:30.766 Atomic Compare & Write Unit: 1 00:30:30.766 Fused Compare & Write: Not Supported 00:30:30.766 Scatter-Gather List 00:30:30.766 SGL Command Set: Supported 00:30:30.766 SGL Keyed: Not Supported 00:30:30.766 SGL Bit Bucket Descriptor: Not Supported 00:30:30.766 SGL Metadata Pointer: Not Supported 00:30:30.766 Oversized SGL: Not Supported 00:30:30.766 SGL Metadata Address: Not Supported 00:30:30.766 SGL Offset: Not Supported 00:30:30.766 Transport SGL Data Block: Not Supported 00:30:30.766 Replay Protected Memory Block: Not Supported 00:30:30.766 00:30:30.767 Firmware Slot Information 00:30:30.767 ========================= 00:30:30.767 Active slot: 1 00:30:30.767 Slot 1 Firmware Revision: 1.0 00:30:30.767 00:30:30.767 00:30:30.767 Commands Supported and Effects 00:30:30.767 ============================== 00:30:30.767 Admin Commands 00:30:30.767 -------------- 00:30:30.767 Delete I/O Submission Queue (00h): Supported 00:30:30.767 Create I/O Submission Queue (01h): Supported 00:30:30.767 Get Log Page (02h): Supported 00:30:30.767 Delete I/O Completion Queue (04h): Supported 00:30:30.767 Create I/O Completion Queue (05h): Supported 00:30:30.767 Identify (06h): Supported 00:30:30.767 Abort (08h): Supported 00:30:30.767 Set Features (09h): Supported 00:30:30.767 Get Features (0Ah): Supported 00:30:30.767 Asynchronous Event Request (0Ch): Supported 00:30:30.767 Namespace Attachment (15h): Supported NS-Inventory-Change 00:30:30.767 Directive Send (19h): Supported 00:30:30.767 Directive Receive (1Ah): Supported 00:30:30.767 Virtualization Management (1Ch): Supported 00:30:30.767 Doorbell Buffer Config (7Ch): Supported 00:30:30.767 Format NVM (80h): Supported LBA-Change 00:30:30.767 I/O Commands 00:30:30.767 ------------ 00:30:30.767 Flush (00h): Supported LBA-Change 00:30:30.767 Write (01h): Supported LBA-Change 00:30:30.767 Read (02h): Supported 00:30:30.767 Compare (05h): Supported 00:30:30.767 Write Zeroes (08h): Supported LBA-Change 00:30:30.767 Dataset Management (09h): Supported LBA-Change 00:30:30.767 Unknown (0Ch): Supported 00:30:30.767 Unknown (12h): Supported 00:30:30.767 Copy (19h): Supported LBA-Change 00:30:30.767 Unknown (1Dh): Supported LBA-Change 00:30:30.767 00:30:30.767 Error Log 00:30:30.767 ========= 00:30:30.767 00:30:30.767 Arbitration 00:30:30.767 =========== 00:30:30.767 Arbitration Burst: no limit 00:30:30.767 00:30:30.767 Power Management 00:30:30.767 ================ 00:30:30.767 Number of Power States: 1 00:30:30.767 Current Power State: Power State #0 00:30:30.767 Power State #0: 00:30:30.767 Max Power: 25.00 W 00:30:30.767 Non-Operational State: Operational 00:30:30.767 Entry Latency: 16 microseconds 00:30:30.767 Exit Latency: 4 microseconds 00:30:30.767 Relative Read Throughput: 0 00:30:30.767 Relative Read Latency: 0 00:30:30.767 Relative Write Throughput: 0 00:30:30.767 Relative Write Latency: 0 00:30:30.767 Idle Power: Not Reported 00:30:30.767 Active Power: Not Reported 00:30:30.767 Non-Operational Permissive Mode: Not Supported 00:30:30.767 00:30:30.767 Health Information 00:30:30.767 ================== 00:30:30.767 Critical Warnings: 00:30:30.767 Available Spare Space: OK 00:30:30.767 Temperature: OK 00:30:30.767 Device 
Reliability: OK 00:30:30.767 Read Only: No 00:30:30.767 Volatile Memory Backup: OK 00:30:30.767 Current Temperature: 323 Kelvin (50 Celsius) 00:30:30.767 Temperature Threshold: 343 Kelvin (70 Celsius) 00:30:30.767 Available Spare: 0% 00:30:30.767 Available Spare Threshold: 0% 00:30:30.767 Life Percentage Used: 0% 00:30:30.767 Data Units Read: 2293 00:30:30.767 Data Units Written: 2080 00:30:30.767 Host Read Commands: 119314 00:30:30.767 Host Write Commands: 117584 00:30:30.767 Controller Busy Time: 0 minutes 00:30:30.767 Power Cycles: 0 00:30:30.767 Power On Hours: 0 hours 00:30:30.767 Unsafe Shutdowns: 0 00:30:30.767 Unrecoverable Media Errors: 0 00:30:30.767 Lifetime Error Log Entries: 0 00:30:30.767 Warning Temperature Time: 0 minutes 00:30:30.767 Critical Temperature Time: 0 minutes 00:30:30.767 00:30:30.767 Number of Queues 00:30:30.767 ================ 00:30:30.767 Number of I/O Submission Queues: 64 00:30:30.767 Number of I/O Completion Queues: 64 00:30:30.767 00:30:30.767 ZNS Specific Controller Data 00:30:30.767 ============================ 00:30:30.767 Zone Append Size Limit: 0 00:30:30.767 00:30:30.767 00:30:30.767 Active Namespaces 00:30:30.767 ================= 00:30:30.767 Namespace ID:1 00:30:30.767 Error Recovery Timeout: Unlimited 00:30:30.767 Command Set Identifier: NVM (00h) 00:30:30.767 Deallocate: Supported 00:30:30.767 Deallocated/Unwritten Error: Supported 00:30:30.767 Deallocated Read Value: All 0x00 00:30:30.767 Deallocate in Write Zeroes: Not Supported 00:30:30.767 Deallocated Guard Field: 0xFFFF 00:30:30.767 Flush: Supported 00:30:30.767 Reservation: Not Supported 00:30:30.767 Namespace Sharing Capabilities: Private 00:30:30.767 Size (in LBAs): 1048576 (4GiB) 00:30:30.767 Capacity (in LBAs): 1048576 (4GiB) 00:30:30.767 Utilization (in LBAs): 1048576 (4GiB) 00:30:30.767 Thin Provisioning: Not Supported 00:30:30.767 Per-NS Atomic Units: No 00:30:30.767 Maximum Single Source Range Length: 128 00:30:30.767 Maximum Copy Length: 128 00:30:30.767 Maximum Source Range Count: 128 00:30:30.767 NGUID/EUI64 Never Reused: No 00:30:30.767 Namespace Write Protected: No 00:30:30.767 Number of LBA Formats: 8 00:30:30.767 Current LBA Format: LBA Format #04 00:30:30.767 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:30.767 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:30.767 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:30.767 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:30.767 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:30.767 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:30.767 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:30.767 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:30.767 00:30:30.768 NVM Specific Namespace Data 00:30:30.768 =========================== 00:30:30.768 Logical Block Storage Tag Mask: 0 00:30:30.768 Protection Information Capabilities: 00:30:30.768 16b Guard Protection Information Storage Tag Support: No 00:30:30.768 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:30:30.768 Storage Tag Check Read Support: No 00:30:30.768 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:30.768 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:30.768 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:30.768 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:30.768 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:30.768 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:30.768 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:30.768 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:30.768 Namespace ID:2 00:30:30.768 Error Recovery Timeout: Unlimited 00:30:30.768 Command Set Identifier: NVM (00h) 00:30:30.768 Deallocate: Supported 00:30:30.768 Deallocated/Unwritten Error: Supported 00:30:30.768 Deallocated Read Value: All 0x00 00:30:30.768 Deallocate in Write Zeroes: Not Supported 00:30:30.768 Deallocated Guard Field: 0xFFFF 00:30:30.768 Flush: Supported 00:30:30.768 Reservation: Not Supported 00:30:30.768 Namespace Sharing Capabilities: Private 00:30:30.768 Size (in LBAs): 1048576 (4GiB) 00:30:30.768 Capacity (in LBAs): 1048576 (4GiB) 00:30:30.768 Utilization (in LBAs): 1048576 (4GiB) 00:30:30.768 Thin Provisioning: Not Supported 00:30:30.768 Per-NS Atomic Units: No 00:30:30.768 Maximum Single Source Range Length: 128 00:30:30.768 Maximum Copy Length: 128 00:30:30.768 Maximum Source Range Count: 128 00:30:30.768 NGUID/EUI64 Never Reused: No 00:30:30.768 Namespace Write Protected: No 00:30:30.768 Number of LBA Formats: 8 00:30:30.768 Current LBA Format: LBA Format #04 00:30:30.768 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:30.768 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:30.768 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:30.768 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:30.768 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:30.768 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:30.768 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:30.768 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:30.768 00:30:30.768 NVM Specific Namespace Data 00:30:30.768 =========================== 00:30:30.768 Logical Block Storage Tag Mask: 0 00:30:30.768 Protection Information Capabilities: 00:30:30.768 16b Guard Protection Information Storage Tag Support: No 00:30:30.768 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:30:30.768 Storage Tag Check Read Support: No 00:30:30.768 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:30.768 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:30.768 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:30.768 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:30.768 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:30.768 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:30.768 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:30.768 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:30.768 Namespace ID:3 00:30:30.768 Error Recovery Timeout: Unlimited 00:30:30.768 Command Set Identifier: NVM (00h) 00:30:30.768 Deallocate: Supported 00:30:30.768 Deallocated/Unwritten Error: Supported 00:30:30.768 Deallocated Read Value: All 0x00 00:30:30.768 Deallocate in Write Zeroes: Not Supported 00:30:30.768 Deallocated Guard Field: 0xFFFF 00:30:30.768 Flush: Supported 00:30:30.768 Reservation: Not Supported 00:30:30.768 
Namespace Sharing Capabilities: Private 00:30:30.768 Size (in LBAs): 1048576 (4GiB) 00:30:30.768 Capacity (in LBAs): 1048576 (4GiB) 00:30:30.768 Utilization (in LBAs): 1048576 (4GiB) 00:30:30.768 Thin Provisioning: Not Supported 00:30:30.768 Per-NS Atomic Units: No 00:30:30.768 Maximum Single Source Range Length: 128 00:30:30.768 Maximum Copy Length: 128 00:30:30.768 Maximum Source Range Count: 128 00:30:30.768 NGUID/EUI64 Never Reused: No 00:30:30.768 Namespace Write Protected: No 00:30:30.768 Number of LBA Formats: 8 00:30:30.768 Current LBA Format: LBA Format #04 00:30:30.768 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:30.768 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:30.768 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:30.768 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:30.768 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:30.768 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:30.768 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:30.768 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:30.768 00:30:30.768 NVM Specific Namespace Data 00:30:30.768 =========================== 00:30:30.768 Logical Block Storage Tag Mask: 0 00:30:30.768 Protection Information Capabilities: 00:30:30.768 16b Guard Protection Information Storage Tag Support: No 00:30:30.768 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:30:30.768 Storage Tag Check Read Support: No 00:30:30.768 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:30.768 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:30.768 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:30.768 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:30.768 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:30.768 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:30.768 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:30.768 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:30.768 15:55:51 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:30:30.768 15:55:51 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:30:31.026 ===================================================== 00:30:31.027 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:30:31.027 ===================================================== 00:30:31.027 Controller Capabilities/Features 00:30:31.027 ================================ 00:30:31.027 Vendor ID: 1b36 00:30:31.027 Subsystem Vendor ID: 1af4 00:30:31.027 Serial Number: 12343 00:30:31.027 Model Number: QEMU NVMe Ctrl 00:30:31.027 Firmware Version: 8.0.0 00:30:31.027 Recommended Arb Burst: 6 00:30:31.027 IEEE OUI Identifier: 00 54 52 00:30:31.027 Multi-path I/O 00:30:31.027 May have multiple subsystem ports: No 00:30:31.027 May have multiple controllers: Yes 00:30:31.027 Associated with SR-IOV VF: No 00:30:31.027 Max Data Transfer Size: 524288 00:30:31.027 Max Number of Namespaces: 256 00:30:31.027 Max Number of I/O Queues: 64 00:30:31.027 NVMe Specification Version (VS): 1.4 00:30:31.027 NVMe Specification Version (Identify): 1.4 00:30:31.027 Maximum Queue Entries: 2048 
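Namespace sizes in these reports are counts of LBAs, with the byte size set by the current LBA format (#04 here, i.e. 4096-byte data blocks), so the parenthesized capacities can be verified directly. For example, for the 4 GiB namespaces on controller 12342 and the 5 GiB namespace on 12341:

  echo $(( 1048576 * 4096 ))   # 4294967296 bytes = 4 GiB
  echo $(( 1310720 * 4096 ))   # 5368709120 bytes = 5 GiB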
00:30:31.027 Contiguous Queues Required: Yes 00:30:31.027 Arbitration Mechanisms Supported 00:30:31.027 Weighted Round Robin: Not Supported 00:30:31.027 Vendor Specific: Not Supported 00:30:31.027 Reset Timeout: 7500 ms 00:30:31.027 Doorbell Stride: 4 bytes 00:30:31.027 NVM Subsystem Reset: Not Supported 00:30:31.027 Command Sets Supported 00:30:31.027 NVM Command Set: Supported 00:30:31.027 Boot Partition: Not Supported 00:30:31.027 Memory Page Size Minimum: 4096 bytes 00:30:31.027 Memory Page Size Maximum: 65536 bytes 00:30:31.027 Persistent Memory Region: Not Supported 00:30:31.027 Optional Asynchronous Events Supported 00:30:31.027 Namespace Attribute Notices: Supported 00:30:31.027 Firmware Activation Notices: Not Supported 00:30:31.027 ANA Change Notices: Not Supported 00:30:31.027 PLE Aggregate Log Change Notices: Not Supported 00:30:31.027 LBA Status Info Alert Notices: Not Supported 00:30:31.027 EGE Aggregate Log Change Notices: Not Supported 00:30:31.027 Normal NVM Subsystem Shutdown event: Not Supported 00:30:31.027 Zone Descriptor Change Notices: Not Supported 00:30:31.027 Discovery Log Change Notices: Not Supported 00:30:31.027 Controller Attributes 00:30:31.027 128-bit Host Identifier: Not Supported 00:30:31.027 Non-Operational Permissive Mode: Not Supported 00:30:31.027 NVM Sets: Not Supported 00:30:31.027 Read Recovery Levels: Not Supported 00:30:31.027 Endurance Groups: Supported 00:30:31.027 Predictable Latency Mode: Not Supported 00:30:31.027 Traffic Based Keep Alive: Not Supported 00:30:31.027 Namespace Granularity: Not Supported 00:30:31.027 SQ Associations: Not Supported 00:30:31.027 UUID List: Not Supported 00:30:31.027 Multi-Domain Subsystem: Not Supported 00:30:31.027 Fixed Capacity Management: Not Supported 00:30:31.027 Variable Capacity Management: Not Supported 00:30:31.027 Delete Endurance Group: Not Supported 00:30:31.027 Delete NVM Set: Not Supported 00:30:31.027 Extended LBA Formats Supported: Supported 00:30:31.027 Flexible Data Placement Supported: Supported 00:30:31.027 00:30:31.027 Controller Memory Buffer Support 00:30:31.027 ================================ 00:30:31.027 Supported: No 00:30:31.027 00:30:31.027 Persistent Memory Region Support 00:30:31.027 ================================ 00:30:31.027 Supported: No 00:30:31.027 00:30:31.027 Admin Command Set Attributes 00:30:31.027 ============================ 00:30:31.027 Security Send/Receive: Not Supported 00:30:31.027 Format NVM: Supported 00:30:31.027 Firmware Activate/Download: Not Supported 00:30:31.027 Namespace Management: Supported 00:30:31.027 Device Self-Test: Not Supported 00:30:31.027 Directives: Supported 00:30:31.027 NVMe-MI: Not Supported 00:30:31.027 Virtualization Management: Not Supported 00:30:31.027 Doorbell Buffer Config: Supported 00:30:31.027 Get LBA Status Capability: Not Supported 00:30:31.027 Command & Feature Lockdown Capability: Not Supported 00:30:31.027 Abort Command Limit: 4 00:30:31.027 Async Event Request Limit: 4 00:30:31.027 Number of Firmware Slots: N/A 00:30:31.027 Firmware Slot 1 Read-Only: N/A 00:30:31.027 Firmware Activation Without Reset: N/A 00:30:31.027 Multiple Update Detection Support: N/A 00:30:31.027 Firmware Update Granularity: No Information Provided 00:30:31.027 Per-Namespace SMART Log: Yes 00:30:31.027 Asymmetric Namespace Access Log Page: Not Supported 00:30:31.027 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:30:31.027 Command Effects Log Page: Supported 00:30:31.027 Get Log Page Extended Data: Supported 00:30:31.027 Telemetry Log Pages: Not
Supported 00:30:31.027 Persistent Event Log Pages: Not Supported 00:30:31.027 Supported Log Pages Log Page: May Support 00:30:31.027 Commands Supported & Effects Log Page: Not Supported 00:30:31.027 Feature Identifiers & Effects Log Page: May Support 00:30:31.027 NVMe-MI Commands & Effects Log Page: May Support 00:30:31.027 Data Area 4 for Telemetry Log: Not Supported 00:30:31.027 Error Log Page Entries Supported: 1 00:30:31.027 Keep Alive: Not Supported 00:30:31.027 00:30:31.027 NVM Command Set Attributes 00:30:31.027 ========================== 00:30:31.027 Submission Queue Entry Size 00:30:31.027 Max: 64 00:30:31.027 Min: 64 00:30:31.027 Completion Queue Entry Size 00:30:31.027 Max: 16 00:30:31.027 Min: 16 00:30:31.027 Number of Namespaces: 256 00:30:31.027 Compare Command: Supported 00:30:31.027 Write Uncorrectable Command: Not Supported 00:30:31.027 Dataset Management Command: Supported 00:30:31.027 Write Zeroes Command: Supported 00:30:31.027 Set Features Save Field: Supported 00:30:31.027 Reservations: Not Supported 00:30:31.027 Timestamp: Supported 00:30:31.027 Copy: Supported 00:30:31.028 Volatile Write Cache: Present 00:30:31.028 Atomic Write Unit (Normal): 1 00:30:31.028 Atomic Write Unit (PFail): 1 00:30:31.028 Atomic Compare & Write Unit: 1 00:30:31.028 Fused Compare & Write: Not Supported 00:30:31.028 Scatter-Gather List 00:30:31.028 SGL Command Set: Supported 00:30:31.028 SGL Keyed: Not Supported 00:30:31.028 SGL Bit Bucket Descriptor: Not Supported 00:30:31.028 SGL Metadata Pointer: Not Supported 00:30:31.028 Oversized SGL: Not Supported 00:30:31.028 SGL Metadata Address: Not Supported 00:30:31.028 SGL Offset: Not Supported 00:30:31.028 Transport SGL Data Block: Not Supported 00:30:31.028 Replay Protected Memory Block: Not Supported 00:30:31.028 00:30:31.028 Firmware Slot Information 00:30:31.028 ========================= 00:30:31.028 Active slot: 1 00:30:31.028 Slot 1 Firmware Revision: 1.0 00:30:31.028 00:30:31.028 00:30:31.028 Commands Supported and Effects 00:30:31.028 ============================== 00:30:31.028 Admin Commands 00:30:31.028 -------------- 00:30:31.028 Delete I/O Submission Queue (00h): Supported 00:30:31.028 Create I/O Submission Queue (01h): Supported 00:30:31.028 Get Log Page (02h): Supported 00:30:31.028 Delete I/O Completion Queue (04h): Supported 00:30:31.028 Create I/O Completion Queue (05h): Supported 00:30:31.028 Identify (06h): Supported 00:30:31.028 Abort (08h): Supported 00:30:31.028 Set Features (09h): Supported 00:30:31.028 Get Features (0Ah): Supported 00:30:31.028 Asynchronous Event Request (0Ch): Supported 00:30:31.028 Namespace Attachment (15h): Supported NS-Inventory-Change 00:30:31.028 Directive Send (19h): Supported 00:30:31.028 Directive Receive (1Ah): Supported 00:30:31.028 Virtualization Management (1Ch): Supported 00:30:31.028 Doorbell Buffer Config (7Ch): Supported 00:30:31.028 Format NVM (80h): Supported LBA-Change 00:30:31.028 I/O Commands 00:30:31.028 ------------ 00:30:31.028 Flush (00h): Supported LBA-Change 00:30:31.028 Write (01h): Supported LBA-Change 00:30:31.028 Read (02h): Supported 00:30:31.028 Compare (05h): Supported 00:30:31.028 Write Zeroes (08h): Supported LBA-Change 00:30:31.028 Dataset Management (09h): Supported LBA-Change 00:30:31.028 Unknown (0Ch): Supported 00:30:31.028 Unknown (12h): Supported 00:30:31.028 Copy (19h): Supported LBA-Change 00:30:31.028 Unknown (1Dh): Supported LBA-Change 00:30:31.028 00:30:31.028 Error Log 00:30:31.028 ========= 00:30:31.028 00:30:31.028 Arbitration 00:30:31.028 ===========
00:30:31.028 Arbitration Burst: no limit 00:30:31.028 00:30:31.028 Power Management 00:30:31.028 ================ 00:30:31.028 Number of Power States: 1 00:30:31.028 Current Power State: Power State #0 00:30:31.028 Power State #0: 00:30:31.028 Max Power: 25.00 W 00:30:31.028 Non-Operational State: Operational 00:30:31.028 Entry Latency: 16 microseconds 00:30:31.028 Exit Latency: 4 microseconds 00:30:31.028 Relative Read Throughput: 0 00:30:31.028 Relative Read Latency: 0 00:30:31.028 Relative Write Throughput: 0 00:30:31.028 Relative Write Latency: 0 00:30:31.028 Idle Power: Not Reported 00:30:31.028 Active Power: Not Reported 00:30:31.028 Non-Operational Permissive Mode: Not Supported 00:30:31.028 00:30:31.028 Health Information 00:30:31.028 ================== 00:30:31.028 Critical Warnings: 00:30:31.028 Available Spare Space: OK 00:30:31.028 Temperature: OK 00:30:31.028 Device Reliability: OK 00:30:31.028 Read Only: No 00:30:31.028 Volatile Memory Backup: OK 00:30:31.028 Current Temperature: 323 Kelvin (50 Celsius) 00:30:31.028 Temperature Threshold: 343 Kelvin (70 Celsius) 00:30:31.028 Available Spare: 0% 00:30:31.028 Available Spare Threshold: 0% 00:30:31.028 Life Percentage Used: 0% 00:30:31.028 Data Units Read: 937 00:30:31.028 Data Units Written: 866 00:30:31.028 Host Read Commands: 41176 00:30:31.028 Host Write Commands: 40599 00:30:31.028 Controller Busy Time: 0 minutes 00:30:31.028 Power Cycles: 0 00:30:31.028 Power On Hours: 0 hours 00:30:31.028 Unsafe Shutdowns: 0 00:30:31.028 Unrecoverable Media Errors: 0 00:30:31.028 Lifetime Error Log Entries: 0 00:30:31.028 Warning Temperature Time: 0 minutes 00:30:31.028 Critical Temperature Time: 0 minutes 00:30:31.028 00:30:31.028 Number of Queues 00:30:31.028 ================ 00:30:31.028 Number of I/O Submission Queues: 64 00:30:31.028 Number of I/O Completion Queues: 64 00:30:31.028 00:30:31.028 ZNS Specific Controller Data 00:30:31.028 ============================ 00:30:31.028 Zone Append Size Limit: 0 00:30:31.028 00:30:31.028 00:30:31.028 Active Namespaces 00:30:31.028 ================= 00:30:31.028 Namespace ID:1 00:30:31.028 Error Recovery Timeout: Unlimited 00:30:31.028 Command Set Identifier: NVM (00h) 00:30:31.028 Deallocate: Supported 00:30:31.028 Deallocated/Unwritten Error: Supported 00:30:31.028 Deallocated Read Value: All 0x00 00:30:31.028 Deallocate in Write Zeroes: Not Supported 00:30:31.028 Deallocated Guard Field: 0xFFFF 00:30:31.028 Flush: Supported 00:30:31.028 Reservation: Not Supported 00:30:31.028 Namespace Sharing Capabilities: Multiple Controllers 00:30:31.028 Size (in LBAs): 262144 (1GiB) 00:30:31.028 Capacity (in LBAs): 262144 (1GiB) 00:30:31.028 Utilization (in LBAs): 262144 (1GiB) 00:30:31.028 Thin Provisioning: Not Supported 00:30:31.028 Per-NS Atomic Units: No 00:30:31.028 Maximum Single Source Range Length: 128 00:30:31.028 Maximum Copy Length: 128 00:30:31.028 Maximum Source Range Count: 128 00:30:31.028 NGUID/EUI64 Never Reused: No 00:30:31.028 Namespace Write Protected: No 00:30:31.028 Endurance group ID: 1 00:30:31.028 Number of LBA Formats: 8 00:30:31.028 Current LBA Format: LBA Format #04 00:30:31.028 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:31.028 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:31.028 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:31.028 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:31.028 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:31.028 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:31.028 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:30:31.028 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:31.028 00:30:31.028 Get Feature FDP: 00:30:31.028 ================ 00:30:31.028 Enabled: Yes 00:30:31.028 FDP configuration index: 0 00:30:31.028 00:30:31.028 FDP configurations log page 00:30:31.028 =========================== 00:30:31.028 Number of FDP configurations: 1 00:30:31.028 Version: 0 00:30:31.028 Size: 112 00:30:31.028 FDP Configuration Descriptor: 0 00:30:31.028 Descriptor Size: 96 00:30:31.028 Reclaim Group Identifier format: 2 00:30:31.028 FDP Volatile Write Cache: Not Present 00:30:31.028 FDP Configuration: Valid 00:30:31.028 Vendor Specific Size: 0 00:30:31.028 Number of Reclaim Groups: 2 00:30:31.028 Number of Reclaim Unit Handles: 8 00:30:31.028 Max Placement Identifiers: 128 00:30:31.028 Number of Namespaces Supported: 256 00:30:31.028 Reclaim unit Nominal Size: 6000000 bytes 00:30:31.028 Estimated Reclaim Unit Time Limit: Not Reported 00:30:31.028 RUH Desc #000: RUH Type: Initially Isolated 00:30:31.028 RUH Desc #001: RUH Type: Initially Isolated 00:30:31.028 RUH Desc #002: RUH Type: Initially Isolated 00:30:31.028 RUH Desc #003: RUH Type: Initially Isolated 00:30:31.028 RUH Desc #004: RUH Type: Initially Isolated 00:30:31.028 RUH Desc #005: RUH Type: Initially Isolated 00:30:31.028 RUH Desc #006: RUH Type: Initially Isolated 00:30:31.028 RUH Desc #007: RUH Type: Initially Isolated 00:30:31.028 00:30:31.028 FDP reclaim unit handle usage log page 00:30:31.028 ====================================== 00:30:31.028 Number of Reclaim Unit Handles: 8 00:30:31.028 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:30:31.028 RUH Usage Desc #001: RUH Attributes: Unused 00:30:31.028 RUH Usage Desc #002: RUH Attributes: Unused 00:30:31.028 RUH Usage Desc #003: RUH Attributes: Unused 00:30:31.029 RUH Usage Desc #004: RUH Attributes: Unused 00:30:31.029 RUH Usage Desc #005: RUH Attributes: Unused 00:30:31.029 RUH Usage Desc #006: RUH Attributes: Unused 00:30:31.029 RUH Usage Desc #007: RUH Attributes: Unused 00:30:31.029 00:30:31.029 FDP statistics log page 00:30:31.029 ======================= 00:30:31.029 Host bytes with metadata written: 527736832 00:30:31.029 Media bytes with metadata written: 527794176 00:30:31.029 Media bytes erased: 0 00:30:31.029 00:30:31.029 FDP events log page 00:30:31.029 =================== 00:30:31.029 Number of FDP events: 0 00:30:31.029 00:30:31.029 NVM Specific Namespace Data 00:30:31.029 =========================== 00:30:31.029 Logical Block Storage Tag Mask: 0 00:30:31.029 Protection Information Capabilities: 00:30:31.029 16b Guard Protection Information Storage Tag Support: No 00:30:31.029 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:30:31.029 Storage Tag Check Read Support: No 00:30:31.029 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:31.029 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:31.029 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:31.029 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:31.029 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:31.029 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:31.029 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:31.029 Extended LBA
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:31.029 ************************************ 00:30:31.029 END TEST nvme_identify 00:30:31.029 ************************************ 00:30:31.029 00:30:31.029 real 0m1.207s 00:30:31.029 user 0m0.441s 00:30:31.029 sys 0m0.553s 00:30:31.029 15:55:52 nvme.nvme_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:31.029 15:55:52 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:30:31.029 15:55:52 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:30:31.029 15:55:52 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:30:31.029 15:55:52 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:31.029 15:55:52 nvme -- common/autotest_common.sh@10 -- # set +x 00:30:31.029 ************************************ 00:30:31.029 START TEST nvme_perf 00:30:31.029 ************************************ 00:30:31.029 15:55:52 nvme.nvme_perf -- common/autotest_common.sh@1127 -- # nvme_perf 00:30:31.029 15:55:52 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:30:32.404 Initializing NVMe Controllers 00:30:32.404 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:30:32.404 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:30:32.404 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:30:32.404 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:30:32.404 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:30:32.404 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:30:32.404 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:30:32.404 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:30:32.404 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:30:32.404 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:30:32.404 Initialization complete. Launching workers. 
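The throughput column in the summary table below can be cross-checked against the IOPS column: spdk_nvme_perf was invoked above with -o 12288, so each read moves 12288 bytes and MiB/s = IOPS * 12288 / 2^20. A minimal sketch of that arithmetic, assuming only a POSIX shell with awk available; the sample values are copied from the first data row and the Total row of the table that follows, not produced by the test itself:

  # Cross-check: reported MiB/s should equal IOPS * IO size / 2^20.
  awk 'BEGIN {
      iops     = 17956.49    # PCIE (0000:00:10.0) NSID 1, IOPS column below
      io_bytes = 12288       # from the -o 12288 flag passed to spdk_nvme_perf
      printf "%.2f MiB/s\n", iops * io_bytes / (1024 * 1024)   # -> 210.43
  }'

The same identity holds for the Total row (107802.85 IOPS -> 1263.31 MiB/s). In the per-device latency histograms further down, the percentage on each "Range in us" bucket is cumulative (the fraction of I/Os completed at or below the bucket's upper bound, so the last bucket reads 100.0000%), while the count in parentheses is the number of I/Os that landed in that bucket alone.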
00:30:32.404 ======================================================== 00:30:32.404 Latency(us) 00:30:32.404 Device Information : IOPS MiB/s Average min max 00:30:32.404 PCIE (0000:00:10.0) NSID 1 from core 0: 17956.49 210.43 7139.00 5613.80 30743.95 00:30:32.404 PCIE (0000:00:11.0) NSID 1 from core 0: 17956.49 210.43 7132.11 5635.38 29332.41 00:30:32.404 PCIE (0000:00:13.0) NSID 1 from core 0: 17956.49 210.43 7123.90 5635.07 28325.21 00:30:32.404 PCIE (0000:00:12.0) NSID 1 from core 0: 17956.49 210.43 7115.41 5660.35 26902.92 00:30:32.404 PCIE (0000:00:12.0) NSID 2 from core 0: 17956.49 210.43 7107.22 5682.35 25524.24 00:30:32.404 PCIE (0000:00:12.0) NSID 3 from core 0: 18020.39 211.18 7073.92 5691.60 20930.88 00:30:32.404 ======================================================== 00:30:32.404 Total : 107802.85 1263.31 7115.23 5613.80 30743.95 00:30:32.404 00:30:32.404 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:30:32.404 ================================================================================= 00:30:32.404 1.00000% : 5847.828us 00:30:32.404 10.00000% : 6074.683us 00:30:32.404 25.00000% : 6301.538us 00:30:32.404 50.00000% : 6654.425us 00:30:32.404 75.00000% : 7158.548us 00:30:32.404 90.00000% : 9023.803us 00:30:32.404 95.00000% : 9779.988us 00:30:32.404 98.00000% : 10687.409us 00:30:32.404 99.00000% : 11594.831us 00:30:32.404 99.50000% : 25508.628us 00:30:32.404 99.90000% : 30449.034us 00:30:32.404 99.99000% : 30852.332us 00:30:32.404 99.99900% : 30852.332us 00:30:32.404 99.99990% : 30852.332us 00:30:32.404 99.99999% : 30852.332us 00:30:32.404 00:30:32.404 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:30:32.404 ================================================================================= 00:30:32.404 1.00000% : 5898.240us 00:30:32.404 10.00000% : 6150.302us 00:30:32.404 25.00000% : 6326.745us 00:30:32.404 50.00000% : 6654.425us 00:30:32.404 75.00000% : 7108.135us 00:30:32.404 90.00000% : 9023.803us 00:30:32.404 95.00000% : 9830.400us 00:30:32.404 98.00000% : 10636.997us 00:30:32.404 99.00000% : 11796.480us 00:30:32.404 99.50000% : 24399.557us 00:30:32.404 99.90000% : 29037.489us 00:30:32.404 99.99000% : 29440.788us 00:30:32.404 99.99900% : 29440.788us 00:30:32.404 99.99990% : 29440.788us 00:30:32.404 99.99999% : 29440.788us 00:30:32.404 00:30:32.404 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:30:32.404 ================================================================================= 00:30:32.404 1.00000% : 5898.240us 00:30:32.404 10.00000% : 6125.095us 00:30:32.404 25.00000% : 6326.745us 00:30:32.404 50.00000% : 6654.425us 00:30:32.404 75.00000% : 7057.723us 00:30:32.404 90.00000% : 8922.978us 00:30:32.404 95.00000% : 9981.637us 00:30:32.404 98.00000% : 10636.997us 00:30:32.404 99.00000% : 11494.006us 00:30:32.404 99.50000% : 23693.785us 00:30:32.404 99.90000% : 28029.243us 00:30:32.404 99.99000% : 28432.542us 00:30:32.404 99.99900% : 28432.542us 00:30:32.404 99.99990% : 28432.542us 00:30:32.404 99.99999% : 28432.542us 00:30:32.404 00:30:32.404 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:30:32.404 ================================================================================= 00:30:32.404 1.00000% : 5898.240us 00:30:32.404 10.00000% : 6125.095us 00:30:32.404 25.00000% : 6326.745us 00:30:32.404 50.00000% : 6654.425us 00:30:32.404 75.00000% : 7057.723us 00:30:32.404 90.00000% : 8922.978us 00:30:32.404 95.00000% : 9880.812us 00:30:32.404 98.00000% : 10788.234us 00:30:32.404 99.00000% : 
11443.594us 00:30:32.404 99.50000% : 22282.240us 00:30:32.404 99.90000% : 26617.698us 00:30:32.404 99.99000% : 27020.997us 00:30:32.404 99.99900% : 27020.997us 00:30:32.404 99.99990% : 27020.997us 00:30:32.404 99.99999% : 27020.997us 00:30:32.404 00:30:32.404 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:30:32.405 ================================================================================= 00:30:32.405 1.00000% : 5898.240us 00:30:32.405 10.00000% : 6125.095us 00:30:32.405 25.00000% : 6326.745us 00:30:32.405 50.00000% : 6654.425us 00:30:32.405 75.00000% : 7108.135us 00:30:32.405 90.00000% : 8922.978us 00:30:32.405 95.00000% : 9880.812us 00:30:32.405 98.00000% : 10838.646us 00:30:32.405 99.00000% : 11645.243us 00:30:32.405 99.50000% : 20870.695us 00:30:32.405 99.90000% : 25206.154us 00:30:32.405 99.99000% : 25508.628us 00:30:32.405 99.99900% : 25609.452us 00:30:32.405 99.99990% : 25609.452us 00:30:32.405 99.99999% : 25609.452us 00:30:32.405 00:30:32.405 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:30:32.405 ================================================================================= 00:30:32.405 1.00000% : 5898.240us 00:30:32.405 10.00000% : 6150.302us 00:30:32.405 25.00000% : 6326.745us 00:30:32.405 50.00000% : 6654.425us 00:30:32.405 75.00000% : 7108.135us 00:30:32.405 90.00000% : 9023.803us 00:30:32.405 95.00000% : 9779.988us 00:30:32.405 98.00000% : 10737.822us 00:30:32.405 99.00000% : 11494.006us 00:30:32.405 99.50000% : 15728.640us 00:30:32.405 99.90000% : 20568.222us 00:30:32.405 99.99000% : 20971.520us 00:30:32.405 99.99900% : 20971.520us 00:30:32.405 99.99990% : 20971.520us 00:30:32.405 99.99999% : 20971.520us 00:30:32.405 00:30:32.405 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:30:32.405 ============================================================================== 00:30:32.405 Range in us Cumulative IO count 00:30:32.405 5595.766 - 5620.972: 0.0056% ( 1) 00:30:32.405 5620.972 - 5646.178: 0.0278% ( 4) 00:30:32.405 5646.178 - 5671.385: 0.0834% ( 10) 00:30:32.405 5671.385 - 5696.591: 0.1390% ( 10) 00:30:32.405 5696.591 - 5721.797: 0.2613% ( 22) 00:30:32.405 5721.797 - 5747.003: 0.3892% ( 23) 00:30:32.405 5747.003 - 5772.209: 0.5004% ( 20) 00:30:32.405 5772.209 - 5797.415: 0.6784% ( 32) 00:30:32.405 5797.415 - 5822.622: 0.9342% ( 46) 00:30:32.405 5822.622 - 5847.828: 1.2511% ( 57) 00:30:32.405 5847.828 - 5873.034: 1.7126% ( 83) 00:30:32.405 5873.034 - 5898.240: 2.1519% ( 79) 00:30:32.405 5898.240 - 5923.446: 2.8803% ( 131) 00:30:32.405 5923.446 - 5948.652: 3.7144% ( 150) 00:30:32.405 5948.652 - 5973.858: 4.6875% ( 175) 00:30:32.405 5973.858 - 5999.065: 6.0609% ( 247) 00:30:32.405 5999.065 - 6024.271: 7.3677% ( 235) 00:30:32.405 6024.271 - 6049.477: 8.7800% ( 254) 00:30:32.405 6049.477 - 6074.683: 10.3481% ( 282) 00:30:32.405 6074.683 - 6099.889: 11.9440% ( 287) 00:30:32.405 6099.889 - 6125.095: 13.6566% ( 308) 00:30:32.405 6125.095 - 6150.302: 15.3692% ( 308) 00:30:32.405 6150.302 - 6175.508: 16.8316% ( 263) 00:30:32.405 6175.508 - 6200.714: 18.5165% ( 303) 00:30:32.405 6200.714 - 6225.920: 20.2458% ( 311) 00:30:32.405 6225.920 - 6251.126: 21.9084% ( 299) 00:30:32.405 6251.126 - 6276.332: 23.7489% ( 331) 00:30:32.405 6276.332 - 6301.538: 25.3503% ( 288) 00:30:32.405 6301.538 - 6326.745: 27.1742% ( 328) 00:30:32.405 6326.745 - 6351.951: 28.9202% ( 314) 00:30:32.405 6351.951 - 6377.157: 30.8107% ( 340) 00:30:32.405 6377.157 - 6402.363: 32.7124% ( 342) 00:30:32.405 6402.363 - 6427.569: 34.5251% ( 326) 
00:30:32.405 6427.569 - 6452.775: 36.3712% ( 332) 00:30:32.405 6452.775 - 6503.188: 39.9689% ( 647) 00:30:32.405 6503.188 - 6553.600: 43.7111% ( 673) 00:30:32.405 6553.600 - 6604.012: 47.5033% ( 682) 00:30:32.405 6604.012 - 6654.425: 51.2233% ( 669) 00:30:32.405 6654.425 - 6704.837: 54.9266% ( 666) 00:30:32.405 6704.837 - 6755.249: 58.5298% ( 648) 00:30:32.405 6755.249 - 6805.662: 61.8550% ( 598) 00:30:32.405 6805.662 - 6856.074: 65.1524% ( 593) 00:30:32.405 6856.074 - 6906.486: 68.0883% ( 528) 00:30:32.405 6906.486 - 6956.898: 70.4793% ( 430) 00:30:32.405 6956.898 - 7007.311: 72.2642% ( 321) 00:30:32.405 7007.311 - 7057.723: 73.6432% ( 248) 00:30:32.405 7057.723 - 7108.135: 74.7053% ( 191) 00:30:32.405 7108.135 - 7158.548: 75.5616% ( 154) 00:30:32.405 7158.548 - 7208.960: 76.4179% ( 154) 00:30:32.405 7208.960 - 7259.372: 77.2520% ( 150) 00:30:32.405 7259.372 - 7309.785: 77.9971% ( 134) 00:30:32.405 7309.785 - 7360.197: 78.6810% ( 123) 00:30:32.405 7360.197 - 7410.609: 79.2649% ( 105) 00:30:32.405 7410.609 - 7461.022: 79.9044% ( 115) 00:30:32.405 7461.022 - 7511.434: 80.4715% ( 102) 00:30:32.405 7511.434 - 7561.846: 81.0220% ( 99) 00:30:32.405 7561.846 - 7612.258: 81.4780% ( 82) 00:30:32.405 7612.258 - 7662.671: 81.9673% ( 88) 00:30:32.405 7662.671 - 7713.083: 82.4344% ( 84) 00:30:32.405 7713.083 - 7763.495: 82.8681% ( 78) 00:30:32.405 7763.495 - 7813.908: 83.2796% ( 74) 00:30:32.405 7813.908 - 7864.320: 83.6577% ( 68) 00:30:32.405 7864.320 - 7914.732: 84.0136% ( 64) 00:30:32.405 7914.732 - 7965.145: 84.3861% ( 67) 00:30:32.405 7965.145 - 8015.557: 84.7253% ( 61) 00:30:32.405 8015.557 - 8065.969: 85.0478% ( 58) 00:30:32.405 8065.969 - 8116.382: 85.3425% ( 53) 00:30:32.405 8116.382 - 8166.794: 85.5927% ( 45) 00:30:32.405 8166.794 - 8217.206: 85.8430% ( 45) 00:30:32.405 8217.206 - 8267.618: 86.0876% ( 44) 00:30:32.405 8267.618 - 8318.031: 86.4213% ( 60) 00:30:32.405 8318.031 - 8368.443: 86.6937% ( 49) 00:30:32.405 8368.443 - 8418.855: 86.9050% ( 38) 00:30:32.405 8418.855 - 8469.268: 87.1274% ( 40) 00:30:32.405 8469.268 - 8519.680: 87.3221% ( 35) 00:30:32.405 8519.680 - 8570.092: 87.5890% ( 48) 00:30:32.405 8570.092 - 8620.505: 87.8392% ( 45) 00:30:32.405 8620.505 - 8670.917: 88.1395% ( 54) 00:30:32.405 8670.917 - 8721.329: 88.4119% ( 49) 00:30:32.405 8721.329 - 8771.742: 88.7233% ( 56) 00:30:32.405 8771.742 - 8822.154: 89.0180% ( 53) 00:30:32.405 8822.154 - 8872.566: 89.3238% ( 55) 00:30:32.405 8872.566 - 8922.978: 89.5852% ( 47) 00:30:32.405 8922.978 - 8973.391: 89.8465% ( 47) 00:30:32.405 8973.391 - 9023.803: 90.1690% ( 58) 00:30:32.405 9023.803 - 9074.215: 90.4582% ( 52) 00:30:32.405 9074.215 - 9124.628: 90.7585% ( 54) 00:30:32.405 9124.628 - 9175.040: 91.1032% ( 62) 00:30:32.405 9175.040 - 9225.452: 91.4480% ( 62) 00:30:32.405 9225.452 - 9275.865: 91.8149% ( 66) 00:30:32.405 9275.865 - 9326.277: 92.1653% ( 63) 00:30:32.405 9326.277 - 9376.689: 92.5100% ( 62) 00:30:32.405 9376.689 - 9427.102: 92.8826% ( 67) 00:30:32.405 9427.102 - 9477.514: 93.2162% ( 60) 00:30:32.405 9477.514 - 9527.926: 93.6165% ( 72) 00:30:32.405 9527.926 - 9578.338: 93.9947% ( 68) 00:30:32.405 9578.338 - 9628.751: 94.3339% ( 61) 00:30:32.405 9628.751 - 9679.163: 94.6842% ( 63) 00:30:32.405 9679.163 - 9729.575: 94.9399% ( 46) 00:30:32.405 9729.575 - 9779.988: 95.1902% ( 45) 00:30:32.405 9779.988 - 9830.400: 95.4293% ( 43) 00:30:32.405 9830.400 - 9880.812: 95.6406% ( 38) 00:30:32.405 9880.812 - 9931.225: 95.8908% ( 45) 00:30:32.405 9931.225 - 9981.637: 96.1577% ( 48) 00:30:32.405 9981.637 - 10032.049: 96.3634% ( 37) 
00:30:32.405 10032.049 - 10082.462: 96.5469% ( 33) 00:30:32.405 10082.462 - 10132.874: 96.7638% ( 39) 00:30:32.405 10132.874 - 10183.286: 96.9473% ( 33) 00:30:32.405 10183.286 - 10233.698: 97.0974% ( 27) 00:30:32.405 10233.698 - 10284.111: 97.2031% ( 19) 00:30:32.405 10284.111 - 10334.523: 97.2920% ( 16) 00:30:32.405 10334.523 - 10384.935: 97.3977% ( 19) 00:30:32.405 10384.935 - 10435.348: 97.4867% ( 16) 00:30:32.405 10435.348 - 10485.760: 97.6312% ( 26) 00:30:32.405 10485.760 - 10536.172: 97.7313% ( 18) 00:30:32.405 10536.172 - 10586.585: 97.8370% ( 19) 00:30:32.405 10586.585 - 10636.997: 97.9371% ( 18) 00:30:32.405 10636.997 - 10687.409: 98.0149% ( 14) 00:30:32.405 10687.409 - 10737.822: 98.0927% ( 14) 00:30:32.405 10737.822 - 10788.234: 98.1762% ( 15) 00:30:32.405 10788.234 - 10838.646: 98.2540% ( 14) 00:30:32.405 10838.646 - 10889.058: 98.3652% ( 20) 00:30:32.405 10889.058 - 10939.471: 98.4375% ( 13) 00:30:32.405 10939.471 - 10989.883: 98.5098% ( 13) 00:30:32.405 10989.883 - 11040.295: 98.5765% ( 12) 00:30:32.405 11040.295 - 11090.708: 98.6377% ( 11) 00:30:32.405 11090.708 - 11141.120: 98.6877% ( 9) 00:30:32.405 11141.120 - 11191.532: 98.7378% ( 9) 00:30:32.405 11191.532 - 11241.945: 98.7878% ( 9) 00:30:32.405 11241.945 - 11292.357: 98.8267% ( 7) 00:30:32.405 11292.357 - 11342.769: 98.8601% ( 6) 00:30:32.405 11342.769 - 11393.182: 98.9046% ( 8) 00:30:32.405 11393.182 - 11443.594: 98.9491% ( 8) 00:30:32.405 11443.594 - 11494.006: 98.9657% ( 3) 00:30:32.405 11494.006 - 11544.418: 98.9880% ( 4) 00:30:32.405 11544.418 - 11594.831: 99.0102% ( 4) 00:30:32.405 11594.831 - 11645.243: 99.0380% ( 5) 00:30:32.405 11645.243 - 11695.655: 99.0603% ( 4) 00:30:32.405 11695.655 - 11746.068: 99.0770% ( 3) 00:30:32.405 11746.068 - 11796.480: 99.0936% ( 3) 00:30:32.405 11796.480 - 11846.892: 99.1214% ( 5) 00:30:32.405 11846.892 - 11897.305: 99.1548% ( 6) 00:30:32.405 11897.305 - 11947.717: 99.1715% ( 3) 00:30:32.405 11947.717 - 11998.129: 99.1937% ( 4) 00:30:32.405 11998.129 - 12048.542: 99.2215% ( 5) 00:30:32.405 12048.542 - 12098.954: 99.2382% ( 3) 00:30:32.405 12098.954 - 12149.366: 99.2438% ( 1) 00:30:32.405 12149.366 - 12199.778: 99.2549% ( 2) 00:30:32.405 12199.778 - 12250.191: 99.2605% ( 1) 00:30:32.405 12250.191 - 12300.603: 99.2716% ( 2) 00:30:32.405 12300.603 - 12351.015: 99.2827% ( 2) 00:30:32.405 12351.015 - 12401.428: 99.2883% ( 1) 00:30:32.405 24601.206 - 24702.031: 99.3105% ( 4) 00:30:32.405 24702.031 - 24802.855: 99.3327% ( 4) 00:30:32.405 24802.855 - 24903.680: 99.3605% ( 5) 00:30:32.405 24903.680 - 25004.505: 99.3883% ( 5) 00:30:32.405 25004.505 - 25105.329: 99.4161% ( 5) 00:30:32.405 25105.329 - 25206.154: 99.4440% ( 5) 00:30:32.405 25206.154 - 25306.978: 99.4718% ( 5) 00:30:32.405 25306.978 - 25407.803: 99.4940% ( 4) 00:30:32.406 25407.803 - 25508.628: 99.5218% ( 5) 00:30:32.406 25508.628 - 25609.452: 99.5496% ( 5) 00:30:32.406 25609.452 - 25710.277: 99.5774% ( 5) 00:30:32.406 25710.277 - 25811.102: 99.5996% ( 4) 00:30:32.406 25811.102 - 26012.751: 99.6441% ( 8) 00:30:32.406 29239.138 - 29440.788: 99.6552% ( 2) 00:30:32.406 29440.788 - 29642.437: 99.7109% ( 10) 00:30:32.406 29642.437 - 29844.086: 99.7609% ( 9) 00:30:32.406 29844.086 - 30045.735: 99.8165% ( 10) 00:30:32.406 30045.735 - 30247.385: 99.8665% ( 9) 00:30:32.406 30247.385 - 30449.034: 99.9222% ( 10) 00:30:32.406 30449.034 - 30650.683: 99.9778% ( 10) 00:30:32.406 30650.683 - 30852.332: 100.0000% ( 4) 00:30:32.406 00:30:32.406 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:30:32.406 
============================================================================== 00:30:32.406 Range in us Cumulative IO count 00:30:32.406 5620.972 - 5646.178: 0.0111% ( 2) 00:30:32.406 5646.178 - 5671.385: 0.0389% ( 5) 00:30:32.406 5671.385 - 5696.591: 0.0834% ( 8) 00:30:32.406 5696.591 - 5721.797: 0.1446% ( 11) 00:30:32.406 5721.797 - 5747.003: 0.2057% ( 11) 00:30:32.406 5747.003 - 5772.209: 0.2558% ( 9) 00:30:32.406 5772.209 - 5797.415: 0.3225% ( 12) 00:30:32.406 5797.415 - 5822.622: 0.4282% ( 19) 00:30:32.406 5822.622 - 5847.828: 0.5338% ( 19) 00:30:32.406 5847.828 - 5873.034: 0.7673% ( 42) 00:30:32.406 5873.034 - 5898.240: 1.1288% ( 65) 00:30:32.406 5898.240 - 5923.446: 1.5347% ( 73) 00:30:32.406 5923.446 - 5948.652: 1.9962% ( 83) 00:30:32.406 5948.652 - 5973.858: 2.5690% ( 103) 00:30:32.406 5973.858 - 5999.065: 3.2863% ( 129) 00:30:32.406 5999.065 - 6024.271: 4.1926% ( 163) 00:30:32.406 6024.271 - 6049.477: 5.4660% ( 229) 00:30:32.406 6049.477 - 6074.683: 6.7671% ( 234) 00:30:32.406 6074.683 - 6099.889: 8.2573% ( 268) 00:30:32.406 6099.889 - 6125.095: 9.8754% ( 291) 00:30:32.406 6125.095 - 6150.302: 11.6993% ( 328) 00:30:32.406 6150.302 - 6175.508: 13.6621% ( 353) 00:30:32.406 6175.508 - 6200.714: 15.8863% ( 400) 00:30:32.406 6200.714 - 6225.920: 18.0605% ( 391) 00:30:32.406 6225.920 - 6251.126: 20.0456% ( 357) 00:30:32.406 6251.126 - 6276.332: 21.9918% ( 350) 00:30:32.406 6276.332 - 6301.538: 23.9268% ( 348) 00:30:32.406 6301.538 - 6326.745: 25.9230% ( 359) 00:30:32.406 6326.745 - 6351.951: 27.8859% ( 353) 00:30:32.406 6351.951 - 6377.157: 29.9266% ( 367) 00:30:32.406 6377.157 - 6402.363: 31.9895% ( 371) 00:30:32.406 6402.363 - 6427.569: 34.1859% ( 395) 00:30:32.406 6427.569 - 6452.775: 36.2989% ( 380) 00:30:32.406 6452.775 - 6503.188: 40.5638% ( 767) 00:30:32.406 6503.188 - 6553.600: 44.9288% ( 785) 00:30:32.406 6553.600 - 6604.012: 49.2215% ( 772) 00:30:32.406 6604.012 - 6654.425: 53.4364% ( 758) 00:30:32.406 6654.425 - 6704.837: 57.5734% ( 744) 00:30:32.406 6704.837 - 6755.249: 61.3768% ( 684) 00:30:32.406 6755.249 - 6805.662: 64.7965% ( 615) 00:30:32.406 6805.662 - 6856.074: 67.9771% ( 572) 00:30:32.406 6856.074 - 6906.486: 70.4070% ( 437) 00:30:32.406 6906.486 - 6956.898: 72.1586% ( 315) 00:30:32.406 6956.898 - 7007.311: 73.4597% ( 234) 00:30:32.406 7007.311 - 7057.723: 74.4440% ( 177) 00:30:32.406 7057.723 - 7108.135: 75.3336% ( 160) 00:30:32.406 7108.135 - 7158.548: 76.1844% ( 153) 00:30:32.406 7158.548 - 7208.960: 76.9795% ( 143) 00:30:32.406 7208.960 - 7259.372: 77.6579% ( 122) 00:30:32.406 7259.372 - 7309.785: 78.2585% ( 108) 00:30:32.406 7309.785 - 7360.197: 78.7867% ( 95) 00:30:32.406 7360.197 - 7410.609: 79.3316% ( 98) 00:30:32.406 7410.609 - 7461.022: 79.8877% ( 100) 00:30:32.406 7461.022 - 7511.434: 80.4548% ( 102) 00:30:32.406 7511.434 - 7561.846: 80.9275% ( 85) 00:30:32.406 7561.846 - 7612.258: 81.3501% ( 76) 00:30:32.406 7612.258 - 7662.671: 81.7949% ( 80) 00:30:32.406 7662.671 - 7713.083: 82.2398% ( 80) 00:30:32.406 7713.083 - 7763.495: 82.6512% ( 74) 00:30:32.406 7763.495 - 7813.908: 83.0738% ( 76) 00:30:32.406 7813.908 - 7864.320: 83.5743% ( 90) 00:30:32.406 7864.320 - 7914.732: 84.0469% ( 85) 00:30:32.406 7914.732 - 7965.145: 84.4862% ( 79) 00:30:32.406 7965.145 - 8015.557: 84.8643% ( 68) 00:30:32.406 8015.557 - 8065.969: 85.2647% ( 72) 00:30:32.406 8065.969 - 8116.382: 85.6261% ( 65) 00:30:32.406 8116.382 - 8166.794: 85.9820% ( 64) 00:30:32.406 8166.794 - 8217.206: 86.2767% ( 53) 00:30:32.406 8217.206 - 8267.618: 86.5380% ( 47) 00:30:32.406 8267.618 - 8318.031: 
86.7716% ( 42) 00:30:32.406 8318.031 - 8368.443: 87.0218% ( 45) 00:30:32.406 8368.443 - 8418.855: 87.2776% ( 46) 00:30:32.406 8418.855 - 8469.268: 87.5389% ( 47) 00:30:32.406 8469.268 - 8519.680: 87.7891% ( 45) 00:30:32.406 8519.680 - 8570.092: 88.0394% ( 45) 00:30:32.406 8570.092 - 8620.505: 88.2896% ( 45) 00:30:32.406 8620.505 - 8670.917: 88.5176% ( 41) 00:30:32.406 8670.917 - 8721.329: 88.7734% ( 46) 00:30:32.406 8721.329 - 8771.742: 89.0236% ( 45) 00:30:32.406 8771.742 - 8822.154: 89.2126% ( 34) 00:30:32.406 8822.154 - 8872.566: 89.4629% ( 45) 00:30:32.406 8872.566 - 8922.978: 89.7131% ( 45) 00:30:32.406 8922.978 - 8973.391: 89.9466% ( 42) 00:30:32.406 8973.391 - 9023.803: 90.2024% ( 46) 00:30:32.406 9023.803 - 9074.215: 90.4415% ( 43) 00:30:32.406 9074.215 - 9124.628: 90.6806% ( 43) 00:30:32.406 9124.628 - 9175.040: 91.0254% ( 62) 00:30:32.406 9175.040 - 9225.452: 91.3256% ( 54) 00:30:32.406 9225.452 - 9275.865: 91.5870% ( 47) 00:30:32.406 9275.865 - 9326.277: 91.8928% ( 55) 00:30:32.406 9326.277 - 9376.689: 92.2487% ( 64) 00:30:32.406 9376.689 - 9427.102: 92.5323% ( 51) 00:30:32.406 9427.102 - 9477.514: 92.8325% ( 54) 00:30:32.406 9477.514 - 9527.926: 93.2218% ( 70) 00:30:32.406 9527.926 - 9578.338: 93.5165% ( 53) 00:30:32.406 9578.338 - 9628.751: 93.7889% ( 49) 00:30:32.406 9628.751 - 9679.163: 94.0781% ( 52) 00:30:32.406 9679.163 - 9729.575: 94.4006% ( 58) 00:30:32.406 9729.575 - 9779.988: 94.7398% ( 61) 00:30:32.406 9779.988 - 9830.400: 95.0623% ( 58) 00:30:32.406 9830.400 - 9880.812: 95.3347% ( 49) 00:30:32.406 9880.812 - 9931.225: 95.6517% ( 57) 00:30:32.406 9931.225 - 9981.637: 95.8964% ( 44) 00:30:32.406 9981.637 - 10032.049: 96.1188% ( 40) 00:30:32.406 10032.049 - 10082.462: 96.3523% ( 42) 00:30:32.406 10082.462 - 10132.874: 96.6081% ( 46) 00:30:32.406 10132.874 - 10183.286: 96.8027% ( 35) 00:30:32.406 10183.286 - 10233.698: 96.9695% ( 30) 00:30:32.406 10233.698 - 10284.111: 97.1419% ( 31) 00:30:32.406 10284.111 - 10334.523: 97.2809% ( 25) 00:30:32.406 10334.523 - 10384.935: 97.4144% ( 24) 00:30:32.406 10384.935 - 10435.348: 97.5478% ( 24) 00:30:32.406 10435.348 - 10485.760: 97.6980% ( 27) 00:30:32.406 10485.760 - 10536.172: 97.8092% ( 20) 00:30:32.406 10536.172 - 10586.585: 97.9259% ( 21) 00:30:32.406 10586.585 - 10636.997: 98.0205% ( 17) 00:30:32.406 10636.997 - 10687.409: 98.1206% ( 18) 00:30:32.406 10687.409 - 10737.822: 98.1984% ( 14) 00:30:32.406 10737.822 - 10788.234: 98.2707% ( 13) 00:30:32.406 10788.234 - 10838.646: 98.3263% ( 10) 00:30:32.406 10838.646 - 10889.058: 98.3930% ( 12) 00:30:32.406 10889.058 - 10939.471: 98.4486% ( 10) 00:30:32.406 10939.471 - 10989.883: 98.5042% ( 10) 00:30:32.406 10989.883 - 11040.295: 98.5487% ( 8) 00:30:32.406 11040.295 - 11090.708: 98.5876% ( 7) 00:30:32.406 11090.708 - 11141.120: 98.6210% ( 6) 00:30:32.406 11141.120 - 11191.532: 98.6544% ( 6) 00:30:32.406 11191.532 - 11241.945: 98.7100% ( 10) 00:30:32.406 11241.945 - 11292.357: 98.7433% ( 6) 00:30:32.406 11292.357 - 11342.769: 98.7711% ( 5) 00:30:32.406 11342.769 - 11393.182: 98.8045% ( 6) 00:30:32.406 11393.182 - 11443.594: 98.8267% ( 4) 00:30:32.406 11443.594 - 11494.006: 98.8545% ( 5) 00:30:32.406 11494.006 - 11544.418: 98.8823% ( 5) 00:30:32.406 11544.418 - 11594.831: 98.9101% ( 5) 00:30:32.406 11594.831 - 11645.243: 98.9379% ( 5) 00:30:32.406 11645.243 - 11695.655: 98.9657% ( 5) 00:30:32.406 11695.655 - 11746.068: 98.9935% ( 5) 00:30:32.406 11746.068 - 11796.480: 99.0214% ( 5) 00:30:32.406 11796.480 - 11846.892: 99.0492% ( 5) 00:30:32.406 11846.892 - 11897.305: 99.0714% ( 4) 
00:30:32.406 11897.305 - 11947.717: 99.0936% ( 4) 00:30:32.406 11947.717 - 11998.129: 99.1103% ( 3) 00:30:32.406 11998.129 - 12048.542: 99.1214% ( 2) 00:30:32.406 12048.542 - 12098.954: 99.1326% ( 2) 00:30:32.406 12098.954 - 12149.366: 99.1437% ( 2) 00:30:32.406 12149.366 - 12199.778: 99.1604% ( 3) 00:30:32.406 12199.778 - 12250.191: 99.1659% ( 1) 00:30:32.406 12250.191 - 12300.603: 99.1770% ( 2) 00:30:32.406 12300.603 - 12351.015: 99.1826% ( 1) 00:30:32.406 12351.015 - 12401.428: 99.1993% ( 3) 00:30:32.406 12401.428 - 12451.840: 99.2104% ( 2) 00:30:32.406 12451.840 - 12502.252: 99.2215% ( 2) 00:30:32.406 12502.252 - 12552.665: 99.2382% ( 3) 00:30:32.407 12552.665 - 12603.077: 99.2493% ( 2) 00:30:32.407 12603.077 - 12653.489: 99.2605% ( 2) 00:30:32.407 12653.489 - 12703.902: 99.2716% ( 2) 00:30:32.407 12703.902 - 12754.314: 99.2827% ( 2) 00:30:32.407 12754.314 - 12804.726: 99.2883% ( 1) 00:30:32.407 23592.960 - 23693.785: 99.3049% ( 3) 00:30:32.407 23693.785 - 23794.609: 99.3327% ( 5) 00:30:32.407 23794.609 - 23895.434: 99.3605% ( 5) 00:30:32.407 23895.434 - 23996.258: 99.3883% ( 5) 00:30:32.407 23996.258 - 24097.083: 99.4217% ( 6) 00:30:32.407 24097.083 - 24197.908: 99.4495% ( 5) 00:30:32.407 24197.908 - 24298.732: 99.4773% ( 5) 00:30:32.407 24298.732 - 24399.557: 99.5051% ( 5) 00:30:32.407 24399.557 - 24500.382: 99.5329% ( 5) 00:30:32.407 24500.382 - 24601.206: 99.5607% ( 5) 00:30:32.407 24601.206 - 24702.031: 99.5885% ( 5) 00:30:32.407 24702.031 - 24802.855: 99.6163% ( 5) 00:30:32.407 24802.855 - 24903.680: 99.6441% ( 5) 00:30:32.407 28029.243 - 28230.892: 99.6942% ( 9) 00:30:32.407 28230.892 - 28432.542: 99.7498% ( 10) 00:30:32.407 28432.542 - 28634.191: 99.8054% ( 10) 00:30:32.407 28634.191 - 28835.840: 99.8610% ( 10) 00:30:32.407 28835.840 - 29037.489: 99.9222% ( 11) 00:30:32.407 29037.489 - 29239.138: 99.9778% ( 10) 00:30:32.407 29239.138 - 29440.788: 100.0000% ( 4) 00:30:32.407 00:30:32.407 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:30:32.407 ============================================================================== 00:30:32.407 Range in us Cumulative IO count 00:30:32.407 5620.972 - 5646.178: 0.0222% ( 4) 00:30:32.407 5646.178 - 5671.385: 0.0278% ( 1) 00:30:32.407 5671.385 - 5696.591: 0.0667% ( 7) 00:30:32.407 5696.591 - 5721.797: 0.1112% ( 8) 00:30:32.407 5721.797 - 5747.003: 0.1668% ( 10) 00:30:32.407 5747.003 - 5772.209: 0.2447% ( 14) 00:30:32.407 5772.209 - 5797.415: 0.3448% ( 18) 00:30:32.407 5797.415 - 5822.622: 0.4838% ( 25) 00:30:32.407 5822.622 - 5847.828: 0.5950% ( 20) 00:30:32.407 5847.828 - 5873.034: 0.7395% ( 26) 00:30:32.407 5873.034 - 5898.240: 1.0509% ( 56) 00:30:32.407 5898.240 - 5923.446: 1.4179% ( 66) 00:30:32.407 5923.446 - 5948.652: 1.9128% ( 89) 00:30:32.407 5948.652 - 5973.858: 2.4855% ( 103) 00:30:32.407 5973.858 - 5999.065: 3.2974% ( 146) 00:30:32.407 5999.065 - 6024.271: 4.3316% ( 186) 00:30:32.407 6024.271 - 6049.477: 5.5494% ( 219) 00:30:32.407 6049.477 - 6074.683: 6.9173% ( 246) 00:30:32.407 6074.683 - 6099.889: 8.4909% ( 283) 00:30:32.407 6099.889 - 6125.095: 10.2424% ( 315) 00:30:32.407 6125.095 - 6150.302: 12.1219% ( 338) 00:30:32.407 6150.302 - 6175.508: 14.1181% ( 359) 00:30:32.407 6175.508 - 6200.714: 16.0476% ( 347) 00:30:32.407 6200.714 - 6225.920: 18.1217% ( 373) 00:30:32.407 6225.920 - 6251.126: 20.1012% ( 356) 00:30:32.407 6251.126 - 6276.332: 22.1753% ( 373) 00:30:32.407 6276.332 - 6301.538: 24.2271% ( 369) 00:30:32.407 6301.538 - 6326.745: 26.3012% ( 373) 00:30:32.407 6326.745 - 6351.951: 28.3697% ( 372) 
00:30:32.407 6351.951 - 6377.157: 30.5160% ( 386) 00:30:32.407 6377.157 - 6402.363: 32.6234% ( 379) 00:30:32.407 6402.363 - 6427.569: 34.7642% ( 385) 00:30:32.407 6427.569 - 6452.775: 36.9440% ( 392) 00:30:32.407 6452.775 - 6503.188: 41.3479% ( 792) 00:30:32.407 6503.188 - 6553.600: 45.6350% ( 771) 00:30:32.407 6553.600 - 6604.012: 49.9222% ( 771) 00:30:32.407 6604.012 - 6654.425: 54.1815% ( 766) 00:30:32.407 6654.425 - 6704.837: 58.3407% ( 748) 00:30:32.407 6704.837 - 6755.249: 62.2275% ( 699) 00:30:32.407 6755.249 - 6805.662: 65.7751% ( 638) 00:30:32.407 6805.662 - 6856.074: 68.8445% ( 552) 00:30:32.407 6856.074 - 6906.486: 71.1966% ( 423) 00:30:32.407 6906.486 - 6956.898: 73.0427% ( 332) 00:30:32.407 6956.898 - 7007.311: 74.2660% ( 220) 00:30:32.407 7007.311 - 7057.723: 75.3114% ( 188) 00:30:32.407 7057.723 - 7108.135: 76.1621% ( 153) 00:30:32.407 7108.135 - 7158.548: 76.9406% ( 140) 00:30:32.407 7158.548 - 7208.960: 77.6246% ( 123) 00:30:32.407 7208.960 - 7259.372: 78.1973% ( 103) 00:30:32.407 7259.372 - 7309.785: 78.6588% ( 83) 00:30:32.407 7309.785 - 7360.197: 79.0536% ( 71) 00:30:32.407 7360.197 - 7410.609: 79.4651% ( 74) 00:30:32.407 7410.609 - 7461.022: 79.8821% ( 75) 00:30:32.407 7461.022 - 7511.434: 80.2435% ( 65) 00:30:32.407 7511.434 - 7561.846: 80.5883% ( 62) 00:30:32.407 7561.846 - 7612.258: 80.9275% ( 61) 00:30:32.407 7612.258 - 7662.671: 81.2889% ( 65) 00:30:32.407 7662.671 - 7713.083: 81.6670% ( 68) 00:30:32.407 7713.083 - 7763.495: 82.0340% ( 66) 00:30:32.407 7763.495 - 7813.908: 82.3677% ( 60) 00:30:32.407 7813.908 - 7864.320: 82.7903% ( 76) 00:30:32.407 7864.320 - 7914.732: 83.1795% ( 70) 00:30:32.407 7914.732 - 7965.145: 83.5632% ( 69) 00:30:32.407 7965.145 - 8015.557: 84.0024% ( 79) 00:30:32.407 8015.557 - 8065.969: 84.3917% ( 70) 00:30:32.407 8065.969 - 8116.382: 84.8087% ( 75) 00:30:32.407 8116.382 - 8166.794: 85.2369% ( 77) 00:30:32.407 8166.794 - 8217.206: 85.6650% ( 77) 00:30:32.407 8217.206 - 8267.618: 86.1043% ( 79) 00:30:32.407 8267.618 - 8318.031: 86.5325% ( 77) 00:30:32.407 8318.031 - 8368.443: 86.9384% ( 73) 00:30:32.407 8368.443 - 8418.855: 87.3221% ( 69) 00:30:32.407 8418.855 - 8469.268: 87.7224% ( 72) 00:30:32.407 8469.268 - 8519.680: 88.0783% ( 64) 00:30:32.407 8519.680 - 8570.092: 88.4230% ( 62) 00:30:32.407 8570.092 - 8620.505: 88.6844% ( 47) 00:30:32.407 8620.505 - 8670.917: 88.9902% ( 55) 00:30:32.407 8670.917 - 8721.329: 89.2126% ( 40) 00:30:32.407 8721.329 - 8771.742: 89.4629% ( 45) 00:30:32.407 8771.742 - 8822.154: 89.6686% ( 37) 00:30:32.407 8822.154 - 8872.566: 89.9077% ( 43) 00:30:32.407 8872.566 - 8922.978: 90.1246% ( 39) 00:30:32.407 8922.978 - 8973.391: 90.3359% ( 38) 00:30:32.407 8973.391 - 9023.803: 90.5472% ( 38) 00:30:32.407 9023.803 - 9074.215: 90.7418% ( 35) 00:30:32.407 9074.215 - 9124.628: 90.9364% ( 35) 00:30:32.407 9124.628 - 9175.040: 91.0921% ( 28) 00:30:32.407 9175.040 - 9225.452: 91.2700% ( 32) 00:30:32.407 9225.452 - 9275.865: 91.4813% ( 38) 00:30:32.407 9275.865 - 9326.277: 91.7149% ( 42) 00:30:32.407 9326.277 - 9376.689: 91.9262% ( 38) 00:30:32.407 9376.689 - 9427.102: 92.1653% ( 43) 00:30:32.407 9427.102 - 9477.514: 92.3877% ( 40) 00:30:32.407 9477.514 - 9527.926: 92.6323% ( 44) 00:30:32.407 9527.926 - 9578.338: 92.8992% ( 48) 00:30:32.407 9578.338 - 9628.751: 93.1606% ( 47) 00:30:32.407 9628.751 - 9679.163: 93.4553% ( 53) 00:30:32.407 9679.163 - 9729.575: 93.7166% ( 47) 00:30:32.407 9729.575 - 9779.988: 94.0002% ( 51) 00:30:32.407 9779.988 - 9830.400: 94.3060% ( 55) 00:30:32.407 9830.400 - 9880.812: 94.6286% ( 58) 
00:30:32.407 9880.812 - 9931.225: 94.9511% ( 58) 00:30:32.407 9931.225 - 9981.637: 95.2847% ( 60) 00:30:32.407 9981.637 - 10032.049: 95.5572% ( 49) 00:30:32.407 10032.049 - 10082.462: 95.8463% ( 52) 00:30:32.407 10082.462 - 10132.874: 96.0910% ( 44) 00:30:32.407 10132.874 - 10183.286: 96.2967% ( 37) 00:30:32.407 10183.286 - 10233.698: 96.4913% ( 35) 00:30:32.407 10233.698 - 10284.111: 96.6915% ( 36) 00:30:32.407 10284.111 - 10334.523: 96.8917% ( 36) 00:30:32.407 10334.523 - 10384.935: 97.1085% ( 39) 00:30:32.407 10384.935 - 10435.348: 97.3254% ( 39) 00:30:32.407 10435.348 - 10485.760: 97.4867% ( 29) 00:30:32.407 10485.760 - 10536.172: 97.6423% ( 28) 00:30:32.407 10536.172 - 10586.585: 97.8203% ( 32) 00:30:32.407 10586.585 - 10636.997: 98.0038% ( 33) 00:30:32.407 10636.997 - 10687.409: 98.1595% ( 28) 00:30:32.407 10687.409 - 10737.822: 98.3040% ( 26) 00:30:32.407 10737.822 - 10788.234: 98.4097% ( 19) 00:30:32.407 10788.234 - 10838.646: 98.4987% ( 16) 00:30:32.407 10838.646 - 10889.058: 98.5988% ( 18) 00:30:32.407 10889.058 - 10939.471: 98.6710% ( 13) 00:30:32.407 10939.471 - 10989.883: 98.7322% ( 11) 00:30:32.407 10989.883 - 11040.295: 98.7878% ( 10) 00:30:32.407 11040.295 - 11090.708: 98.8212% ( 6) 00:30:32.407 11090.708 - 11141.120: 98.8379% ( 3) 00:30:32.407 11141.120 - 11191.532: 98.8545% ( 3) 00:30:32.407 11191.532 - 11241.945: 98.8768% ( 4) 00:30:32.407 11241.945 - 11292.357: 98.9046% ( 5) 00:30:32.407 11292.357 - 11342.769: 98.9324% ( 5) 00:30:32.407 11342.769 - 11393.182: 98.9657% ( 6) 00:30:32.407 11393.182 - 11443.594: 98.9991% ( 6) 00:30:32.407 11443.594 - 11494.006: 99.0158% ( 3) 00:30:32.407 11494.006 - 11544.418: 99.0325% ( 3) 00:30:32.407 11544.418 - 11594.831: 99.0492% ( 3) 00:30:32.407 11594.831 - 11645.243: 99.0658% ( 3) 00:30:32.407 11645.243 - 11695.655: 99.0825% ( 3) 00:30:32.407 11695.655 - 11746.068: 99.0992% ( 3) 00:30:32.407 11746.068 - 11796.480: 99.1103% ( 2) 00:30:32.407 11796.480 - 11846.892: 99.1270% ( 3) 00:30:32.407 11846.892 - 11897.305: 99.1437% ( 3) 00:30:32.407 11897.305 - 11947.717: 99.1604% ( 3) 00:30:32.407 11947.717 - 11998.129: 99.1770% ( 3) 00:30:32.407 11998.129 - 12048.542: 99.1937% ( 3) 00:30:32.407 12048.542 - 12098.954: 99.2104% ( 3) 00:30:32.407 12098.954 - 12149.366: 99.2271% ( 3) 00:30:32.407 12149.366 - 12199.778: 99.2438% ( 3) 00:30:32.407 12199.778 - 12250.191: 99.2549% ( 2) 00:30:32.407 12250.191 - 12300.603: 99.2716% ( 3) 00:30:32.407 12300.603 - 12351.015: 99.2883% ( 3) 00:30:32.407 22483.889 - 22584.714: 99.2994% ( 2) 00:30:32.407 22584.714 - 22685.538: 99.3161% ( 3) 00:30:32.407 22685.538 - 22786.363: 99.3272% ( 2) 00:30:32.407 22786.363 - 22887.188: 99.3439% ( 3) 00:30:32.407 22887.188 - 22988.012: 99.3605% ( 3) 00:30:32.407 22988.012 - 23088.837: 99.3772% ( 3) 00:30:32.407 23088.837 - 23189.662: 99.3939% ( 3) 00:30:32.407 23189.662 - 23290.486: 99.4106% ( 3) 00:30:32.407 23290.486 - 23391.311: 99.4328% ( 4) 00:30:32.407 23391.311 - 23492.135: 99.4606% ( 5) 00:30:32.407 23492.135 - 23592.960: 99.4884% ( 5) 00:30:32.407 23592.960 - 23693.785: 99.5162% ( 5) 00:30:32.408 23693.785 - 23794.609: 99.5385% ( 4) 00:30:32.408 23794.609 - 23895.434: 99.5663% ( 5) 00:30:32.408 23895.434 - 23996.258: 99.5941% ( 5) 00:30:32.408 23996.258 - 24097.083: 99.5996% ( 1) 00:30:32.408 24097.083 - 24197.908: 99.6274% ( 5) 00:30:32.408 24197.908 - 24298.732: 99.6441% ( 3) 00:30:32.408 27020.997 - 27222.646: 99.6942% ( 9) 00:30:32.408 27222.646 - 27424.295: 99.7442% ( 9) 00:30:32.408 27424.295 - 27625.945: 99.8054% ( 11) 00:30:32.408 27625.945 - 27827.594: 
99.8610% ( 10) 00:30:32.408 27827.594 - 28029.243: 99.9110% ( 9) 00:30:32.408 28029.243 - 28230.892: 99.9722% ( 11) 00:30:32.408 28230.892 - 28432.542: 100.0000% ( 5) 00:30:32.408 00:30:32.408 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:30:32.408 ============================================================================== 00:30:32.408 Range in us Cumulative IO count 00:30:32.408 5646.178 - 5671.385: 0.0167% ( 3) 00:30:32.408 5671.385 - 5696.591: 0.0667% ( 9) 00:30:32.408 5696.591 - 5721.797: 0.1112% ( 8) 00:30:32.408 5721.797 - 5747.003: 0.1668% ( 10) 00:30:32.408 5747.003 - 5772.209: 0.2391% ( 13) 00:30:32.408 5772.209 - 5797.415: 0.3114% ( 13) 00:30:32.408 5797.415 - 5822.622: 0.4170% ( 19) 00:30:32.408 5822.622 - 5847.828: 0.5560% ( 25) 00:30:32.408 5847.828 - 5873.034: 0.7229% ( 30) 00:30:32.408 5873.034 - 5898.240: 1.0676% ( 62) 00:30:32.408 5898.240 - 5923.446: 1.4791% ( 74) 00:30:32.408 5923.446 - 5948.652: 1.9462% ( 84) 00:30:32.408 5948.652 - 5973.858: 2.5968% ( 117) 00:30:32.408 5973.858 - 5999.065: 3.3141% ( 129) 00:30:32.408 5999.065 - 6024.271: 4.3205% ( 181) 00:30:32.408 6024.271 - 6049.477: 5.5605% ( 223) 00:30:32.408 6049.477 - 6074.683: 6.8783% ( 237) 00:30:32.408 6074.683 - 6099.889: 8.2129% ( 240) 00:30:32.408 6099.889 - 6125.095: 10.0534% ( 331) 00:30:32.408 6125.095 - 6150.302: 12.0107% ( 352) 00:30:32.408 6150.302 - 6175.508: 14.0013% ( 358) 00:30:32.408 6175.508 - 6200.714: 15.9364% ( 348) 00:30:32.408 6200.714 - 6225.920: 18.0661% ( 383) 00:30:32.408 6225.920 - 6251.126: 20.1512% ( 375) 00:30:32.408 6251.126 - 6276.332: 22.1252% ( 355) 00:30:32.408 6276.332 - 6301.538: 24.0658% ( 349) 00:30:32.408 6301.538 - 6326.745: 26.1065% ( 367) 00:30:32.408 6326.745 - 6351.951: 28.1806% ( 373) 00:30:32.408 6351.951 - 6377.157: 30.1991% ( 363) 00:30:32.408 6377.157 - 6402.363: 32.3677% ( 390) 00:30:32.408 6402.363 - 6427.569: 34.5474% ( 392) 00:30:32.408 6427.569 - 6452.775: 36.6826% ( 384) 00:30:32.408 6452.775 - 6503.188: 41.1199% ( 798) 00:30:32.408 6503.188 - 6553.600: 45.5794% ( 802) 00:30:32.408 6553.600 - 6604.012: 49.9611% ( 788) 00:30:32.408 6604.012 - 6654.425: 54.1926% ( 761) 00:30:32.408 6654.425 - 6704.837: 58.2963% ( 738) 00:30:32.408 6704.837 - 6755.249: 62.1386% ( 691) 00:30:32.408 6755.249 - 6805.662: 65.6584% ( 633) 00:30:32.408 6805.662 - 6856.074: 68.8278% ( 570) 00:30:32.408 6856.074 - 6906.486: 71.2689% ( 439) 00:30:32.408 6906.486 - 6956.898: 73.0205% ( 315) 00:30:32.408 6956.898 - 7007.311: 74.2660% ( 224) 00:30:32.408 7007.311 - 7057.723: 75.2113% ( 170) 00:30:32.408 7057.723 - 7108.135: 76.0787% ( 156) 00:30:32.408 7108.135 - 7158.548: 76.9128% ( 150) 00:30:32.408 7158.548 - 7208.960: 77.5690% ( 118) 00:30:32.408 7208.960 - 7259.372: 78.1361% ( 102) 00:30:32.408 7259.372 - 7309.785: 78.5532% ( 75) 00:30:32.408 7309.785 - 7360.197: 79.0314% ( 86) 00:30:32.408 7360.197 - 7410.609: 79.4317% ( 72) 00:30:32.408 7410.609 - 7461.022: 79.8210% ( 70) 00:30:32.408 7461.022 - 7511.434: 80.1657% ( 62) 00:30:32.408 7511.434 - 7561.846: 80.5105% ( 62) 00:30:32.408 7561.846 - 7612.258: 80.8385% ( 59) 00:30:32.408 7612.258 - 7662.671: 81.1555% ( 57) 00:30:32.408 7662.671 - 7713.083: 81.5169% ( 65) 00:30:32.408 7713.083 - 7763.495: 81.8450% ( 59) 00:30:32.408 7763.495 - 7813.908: 82.2008% ( 64) 00:30:32.408 7813.908 - 7864.320: 82.6123% ( 74) 00:30:32.408 7864.320 - 7914.732: 82.9460% ( 60) 00:30:32.408 7914.732 - 7965.145: 83.2740% ( 59) 00:30:32.408 7965.145 - 8015.557: 83.7077% ( 78) 00:30:32.408 8015.557 - 8065.969: 84.0914% ( 69) 
00:30:32.408 8065.969 - 8116.382: 84.4973% ( 73) 00:30:32.408 8116.382 - 8166.794: 84.9144% ( 75) 00:30:32.408 8166.794 - 8217.206: 85.3592% ( 80) 00:30:32.408 8217.206 - 8267.618: 85.7596% ( 72) 00:30:32.408 8267.618 - 8318.031: 86.2100% ( 81) 00:30:32.408 8318.031 - 8368.443: 86.6659% ( 82) 00:30:32.408 8368.443 - 8418.855: 87.0830% ( 75) 00:30:32.408 8418.855 - 8469.268: 87.4222% ( 61) 00:30:32.408 8469.268 - 8519.680: 87.7836% ( 65) 00:30:32.408 8519.680 - 8570.092: 88.1617% ( 68) 00:30:32.408 8570.092 - 8620.505: 88.5009% ( 61) 00:30:32.408 8620.505 - 8670.917: 88.8012% ( 54) 00:30:32.408 8670.917 - 8721.329: 89.1403% ( 61) 00:30:32.408 8721.329 - 8771.742: 89.4184% ( 50) 00:30:32.408 8771.742 - 8822.154: 89.7631% ( 62) 00:30:32.408 8822.154 - 8872.566: 89.9744% ( 38) 00:30:32.408 8872.566 - 8922.978: 90.1968% ( 40) 00:30:32.408 8922.978 - 8973.391: 90.4248% ( 41) 00:30:32.408 8973.391 - 9023.803: 90.6584% ( 42) 00:30:32.408 9023.803 - 9074.215: 90.8697% ( 38) 00:30:32.408 9074.215 - 9124.628: 91.0976% ( 41) 00:30:32.408 9124.628 - 9175.040: 91.3645% ( 48) 00:30:32.408 9175.040 - 9225.452: 91.5981% ( 42) 00:30:32.408 9225.452 - 9275.865: 91.8261% ( 41) 00:30:32.408 9275.865 - 9326.277: 92.0485% ( 40) 00:30:32.408 9326.277 - 9376.689: 92.3043% ( 46) 00:30:32.408 9376.689 - 9427.102: 92.5211% ( 39) 00:30:32.408 9427.102 - 9477.514: 92.7714% ( 45) 00:30:32.408 9477.514 - 9527.926: 93.0549% ( 51) 00:30:32.408 9527.926 - 9578.338: 93.3608% ( 55) 00:30:32.408 9578.338 - 9628.751: 93.6221% ( 47) 00:30:32.408 9628.751 - 9679.163: 93.8835% ( 47) 00:30:32.408 9679.163 - 9729.575: 94.1448% ( 47) 00:30:32.408 9729.575 - 9779.988: 94.4339% ( 52) 00:30:32.408 9779.988 - 9830.400: 94.7398% ( 55) 00:30:32.408 9830.400 - 9880.812: 95.0011% ( 47) 00:30:32.408 9880.812 - 9931.225: 95.2347% ( 42) 00:30:32.408 9931.225 - 9981.637: 95.4738% ( 43) 00:30:32.408 9981.637 - 10032.049: 95.6851% ( 38) 00:30:32.408 10032.049 - 10082.462: 95.9075% ( 40) 00:30:32.408 10082.462 - 10132.874: 96.1466% ( 43) 00:30:32.408 10132.874 - 10183.286: 96.3857% ( 43) 00:30:32.408 10183.286 - 10233.698: 96.6025% ( 39) 00:30:32.408 10233.698 - 10284.111: 96.8472% ( 44) 00:30:32.408 10284.111 - 10334.523: 97.0363% ( 34) 00:30:32.408 10334.523 - 10384.935: 97.2031% ( 30) 00:30:32.408 10384.935 - 10435.348: 97.3476% ( 26) 00:30:32.408 10435.348 - 10485.760: 97.4644% ( 21) 00:30:32.408 10485.760 - 10536.172: 97.5923% ( 23) 00:30:32.408 10536.172 - 10586.585: 97.6813% ( 16) 00:30:32.408 10586.585 - 10636.997: 97.7869% ( 19) 00:30:32.408 10636.997 - 10687.409: 97.8648% ( 14) 00:30:32.408 10687.409 - 10737.822: 97.9649% ( 18) 00:30:32.408 10737.822 - 10788.234: 98.0761% ( 20) 00:30:32.408 10788.234 - 10838.646: 98.1817% ( 19) 00:30:32.408 10838.646 - 10889.058: 98.2818% ( 18) 00:30:32.408 10889.058 - 10939.471: 98.3763% ( 17) 00:30:32.408 10939.471 - 10989.883: 98.4597% ( 15) 00:30:32.408 10989.883 - 11040.295: 98.5487% ( 16) 00:30:32.408 11040.295 - 11090.708: 98.6321% ( 15) 00:30:32.408 11090.708 - 11141.120: 98.6933% ( 11) 00:30:32.408 11141.120 - 11191.532: 98.7489% ( 10) 00:30:32.408 11191.532 - 11241.945: 98.8212% ( 13) 00:30:32.408 11241.945 - 11292.357: 98.8712% ( 9) 00:30:32.408 11292.357 - 11342.769: 98.9213% ( 9) 00:30:32.408 11342.769 - 11393.182: 98.9602% ( 7) 00:30:32.408 11393.182 - 11443.594: 99.0047% ( 8) 00:30:32.408 11443.594 - 11494.006: 99.0436% ( 7) 00:30:32.408 11494.006 - 11544.418: 99.0547% ( 2) 00:30:32.408 11544.418 - 11594.831: 99.0714% ( 3) 00:30:32.408 11594.831 - 11645.243: 99.0881% ( 3) 00:30:32.408 
11645.243 - 11695.655: 99.1048% ( 3) 00:30:32.408 11695.655 - 11746.068: 99.1214% ( 3) 00:30:32.408 11746.068 - 11796.480: 99.1381% ( 3) 00:30:32.408 11796.480 - 11846.892: 99.1548% ( 3) 00:30:32.408 11846.892 - 11897.305: 99.1715% ( 3) 00:30:32.408 11897.305 - 11947.717: 99.1882% ( 3) 00:30:32.408 11947.717 - 11998.129: 99.2048% ( 3) 00:30:32.408 11998.129 - 12048.542: 99.2215% ( 3) 00:30:32.408 12048.542 - 12098.954: 99.2327% ( 2) 00:30:32.408 12098.954 - 12149.366: 99.2493% ( 3) 00:30:32.408 12149.366 - 12199.778: 99.2660% ( 3) 00:30:32.408 12199.778 - 12250.191: 99.2771% ( 2) 00:30:32.408 12250.191 - 12300.603: 99.2883% ( 2) 00:30:32.408 21374.818 - 21475.643: 99.3105% ( 4) 00:30:32.408 21475.643 - 21576.468: 99.3383% ( 5) 00:30:32.408 21576.468 - 21677.292: 99.3661% ( 5) 00:30:32.408 21677.292 - 21778.117: 99.3939% ( 5) 00:30:32.408 21778.117 - 21878.942: 99.4161% ( 4) 00:30:32.408 21878.942 - 21979.766: 99.4440% ( 5) 00:30:32.408 21979.766 - 22080.591: 99.4718% ( 5) 00:30:32.408 22080.591 - 22181.415: 99.4996% ( 5) 00:30:32.408 22181.415 - 22282.240: 99.5218% ( 4) 00:30:32.408 22282.240 - 22383.065: 99.5552% ( 6) 00:30:32.408 22383.065 - 22483.889: 99.5830% ( 5) 00:30:32.408 22483.889 - 22584.714: 99.6052% ( 4) 00:30:32.408 22584.714 - 22685.538: 99.6386% ( 6) 00:30:32.408 22685.538 - 22786.363: 99.6441% ( 1) 00:30:32.408 25609.452 - 25710.277: 99.6664% ( 4) 00:30:32.408 25710.277 - 25811.102: 99.6942% ( 5) 00:30:32.408 25811.102 - 26012.751: 99.7498% ( 10) 00:30:32.408 26012.751 - 26214.400: 99.8109% ( 11) 00:30:32.408 26214.400 - 26416.049: 99.8610% ( 9) 00:30:32.408 26416.049 - 26617.698: 99.9166% ( 10) 00:30:32.408 26617.698 - 26819.348: 99.9722% ( 10) 00:30:32.408 26819.348 - 27020.997: 100.0000% ( 5) 00:30:32.408 00:30:32.408 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:30:32.408 ============================================================================== 00:30:32.408 Range in us Cumulative IO count 00:30:32.408 5671.385 - 5696.591: 0.0222% ( 4) 00:30:32.408 5696.591 - 5721.797: 0.0890% ( 12) 00:30:32.409 5721.797 - 5747.003: 0.1446% ( 10) 00:30:32.409 5747.003 - 5772.209: 0.1946% ( 9) 00:30:32.409 5772.209 - 5797.415: 0.3003% ( 19) 00:30:32.409 5797.415 - 5822.622: 0.3892% ( 16) 00:30:32.409 5822.622 - 5847.828: 0.5783% ( 34) 00:30:32.409 5847.828 - 5873.034: 0.8118% ( 42) 00:30:32.409 5873.034 - 5898.240: 1.1177% ( 55) 00:30:32.409 5898.240 - 5923.446: 1.5069% ( 70) 00:30:32.409 5923.446 - 5948.652: 1.9239% ( 75) 00:30:32.409 5948.652 - 5973.858: 2.5745% ( 117) 00:30:32.409 5973.858 - 5999.065: 3.3085% ( 132) 00:30:32.409 5999.065 - 6024.271: 4.3094% ( 180) 00:30:32.409 6024.271 - 6049.477: 5.3770% ( 192) 00:30:32.409 6049.477 - 6074.683: 6.7727% ( 251) 00:30:32.409 6074.683 - 6099.889: 8.5409% ( 318) 00:30:32.409 6099.889 - 6125.095: 10.2536% ( 308) 00:30:32.409 6125.095 - 6150.302: 11.8883% ( 294) 00:30:32.409 6150.302 - 6175.508: 13.7956% ( 343) 00:30:32.409 6175.508 - 6200.714: 15.7585% ( 353) 00:30:32.409 6200.714 - 6225.920: 17.8325% ( 373) 00:30:32.409 6225.920 - 6251.126: 20.0678% ( 402) 00:30:32.409 6251.126 - 6276.332: 22.1197% ( 369) 00:30:32.409 6276.332 - 6301.538: 24.0881% ( 354) 00:30:32.409 6301.538 - 6326.745: 26.1121% ( 364) 00:30:32.409 6326.745 - 6351.951: 28.1417% ( 365) 00:30:32.409 6351.951 - 6377.157: 30.2213% ( 374) 00:30:32.409 6377.157 - 6402.363: 32.3565% ( 384) 00:30:32.409 6402.363 - 6427.569: 34.6085% ( 405) 00:30:32.409 6427.569 - 6452.775: 36.7883% ( 392) 00:30:32.409 6452.775 - 6503.188: 41.2533% ( 803) 00:30:32.409 
6503.188 - 6553.600: 45.5683% ( 776) 00:30:32.409 6553.600 - 6604.012: 49.8499% ( 770) 00:30:32.409 6604.012 - 6654.425: 54.1426% ( 772) 00:30:32.409 6654.425 - 6704.837: 58.3129% ( 750) 00:30:32.409 6704.837 - 6755.249: 62.0774% ( 677) 00:30:32.409 6755.249 - 6805.662: 65.4860% ( 613) 00:30:32.409 6805.662 - 6856.074: 68.6666% ( 572) 00:30:32.409 6856.074 - 6906.486: 71.1021% ( 438) 00:30:32.409 6906.486 - 6956.898: 72.8314% ( 311) 00:30:32.409 6956.898 - 7007.311: 74.0158% ( 213) 00:30:32.409 7007.311 - 7057.723: 74.9388% ( 166) 00:30:32.409 7057.723 - 7108.135: 75.7618% ( 148) 00:30:32.409 7108.135 - 7158.548: 76.5625% ( 144) 00:30:32.409 7158.548 - 7208.960: 77.3465% ( 141) 00:30:32.409 7208.960 - 7259.372: 77.9193% ( 103) 00:30:32.409 7259.372 - 7309.785: 78.4308% ( 92) 00:30:32.409 7309.785 - 7360.197: 78.9368% ( 91) 00:30:32.409 7360.197 - 7410.609: 79.4373% ( 90) 00:30:32.409 7410.609 - 7461.022: 79.9322% ( 89) 00:30:32.409 7461.022 - 7511.434: 80.3770% ( 80) 00:30:32.409 7511.434 - 7561.846: 80.7551% ( 68) 00:30:32.409 7561.846 - 7612.258: 81.0776% ( 58) 00:30:32.409 7612.258 - 7662.671: 81.3723% ( 53) 00:30:32.409 7662.671 - 7713.083: 81.7115% ( 61) 00:30:32.409 7713.083 - 7763.495: 82.1619% ( 81) 00:30:32.409 7763.495 - 7813.908: 82.5067% ( 62) 00:30:32.409 7813.908 - 7864.320: 82.9015% ( 71) 00:30:32.409 7864.320 - 7914.732: 83.2295% ( 59) 00:30:32.409 7914.732 - 7965.145: 83.5965% ( 66) 00:30:32.409 7965.145 - 8015.557: 83.9357% ( 61) 00:30:32.409 8015.557 - 8065.969: 84.2749% ( 61) 00:30:32.409 8065.969 - 8116.382: 84.5863% ( 56) 00:30:32.409 8116.382 - 8166.794: 84.9922% ( 73) 00:30:32.409 8166.794 - 8217.206: 85.3258% ( 60) 00:30:32.409 8217.206 - 8267.618: 85.6428% ( 57) 00:30:32.409 8267.618 - 8318.031: 85.9319% ( 52) 00:30:32.409 8318.031 - 8368.443: 86.2378% ( 55) 00:30:32.409 8368.443 - 8418.855: 86.5603% ( 58) 00:30:32.409 8418.855 - 8469.268: 86.9050% ( 62) 00:30:32.409 8469.268 - 8519.680: 87.2553% ( 63) 00:30:32.409 8519.680 - 8570.092: 87.6390% ( 69) 00:30:32.409 8570.092 - 8620.505: 87.9337% ( 53) 00:30:32.409 8620.505 - 8670.917: 88.2562% ( 58) 00:30:32.409 8670.917 - 8721.329: 88.6343% ( 68) 00:30:32.409 8721.329 - 8771.742: 88.9735% ( 61) 00:30:32.409 8771.742 - 8822.154: 89.3072% ( 60) 00:30:32.409 8822.154 - 8872.566: 89.6964% ( 70) 00:30:32.409 8872.566 - 8922.978: 90.0189% ( 58) 00:30:32.409 8922.978 - 8973.391: 90.3414% ( 58) 00:30:32.409 8973.391 - 9023.803: 90.6528% ( 56) 00:30:32.409 9023.803 - 9074.215: 90.9475% ( 53) 00:30:32.409 9074.215 - 9124.628: 91.2478% ( 54) 00:30:32.409 9124.628 - 9175.040: 91.5480% ( 54) 00:30:32.409 9175.040 - 9225.452: 91.8149% ( 48) 00:30:32.409 9225.452 - 9275.865: 92.1263% ( 56) 00:30:32.409 9275.865 - 9326.277: 92.4266% ( 54) 00:30:32.409 9326.277 - 9376.689: 92.6879% ( 47) 00:30:32.409 9376.689 - 9427.102: 92.9604% ( 49) 00:30:32.409 9427.102 - 9477.514: 93.1995% ( 43) 00:30:32.409 9477.514 - 9527.926: 93.4553% ( 46) 00:30:32.409 9527.926 - 9578.338: 93.6944% ( 43) 00:30:32.409 9578.338 - 9628.751: 93.9335% ( 43) 00:30:32.409 9628.751 - 9679.163: 94.1670% ( 42) 00:30:32.409 9679.163 - 9729.575: 94.4284% ( 47) 00:30:32.409 9729.575 - 9779.988: 94.6564% ( 41) 00:30:32.409 9779.988 - 9830.400: 94.8899% ( 42) 00:30:32.409 9830.400 - 9880.812: 95.1012% ( 38) 00:30:32.409 9880.812 - 9931.225: 95.3181% ( 39) 00:30:32.409 9931.225 - 9981.637: 95.5961% ( 50) 00:30:32.409 9981.637 - 10032.049: 95.8741% ( 50) 00:30:32.409 10032.049 - 10082.462: 96.0743% ( 36) 00:30:32.409 10082.462 - 10132.874: 96.3023% ( 41) 00:30:32.409 
10132.874 - 10183.286: 96.5136% ( 38) 00:30:32.409 10183.286 - 10233.698: 96.7193% ( 37) 00:30:32.409 10233.698 - 10284.111: 96.8917% ( 31) 00:30:32.409 10284.111 - 10334.523: 97.0585% ( 30) 00:30:32.409 10334.523 - 10384.935: 97.1975% ( 25) 00:30:32.409 10384.935 - 10435.348: 97.3143% ( 21) 00:30:32.409 10435.348 - 10485.760: 97.4310% ( 21) 00:30:32.409 10485.760 - 10536.172: 97.5311% ( 18) 00:30:32.409 10536.172 - 10586.585: 97.6479% ( 21) 00:30:32.409 10586.585 - 10636.997: 97.7647% ( 21) 00:30:32.409 10636.997 - 10687.409: 97.8536% ( 16) 00:30:32.409 10687.409 - 10737.822: 97.9204% ( 12) 00:30:32.409 10737.822 - 10788.234: 97.9927% ( 13) 00:30:32.409 10788.234 - 10838.646: 98.0538% ( 11) 00:30:32.409 10838.646 - 10889.058: 98.1150% ( 11) 00:30:32.409 10889.058 - 10939.471: 98.1706% ( 10) 00:30:32.409 10939.471 - 10989.883: 98.2151% ( 8) 00:30:32.409 10989.883 - 11040.295: 98.2540% ( 7) 00:30:32.409 11040.295 - 11090.708: 98.3207% ( 12) 00:30:32.409 11090.708 - 11141.120: 98.4041% ( 15) 00:30:32.409 11141.120 - 11191.532: 98.4931% ( 16) 00:30:32.409 11191.532 - 11241.945: 98.5821% ( 16) 00:30:32.409 11241.945 - 11292.357: 98.6432% ( 11) 00:30:32.409 11292.357 - 11342.769: 98.7211% ( 14) 00:30:32.409 11342.769 - 11393.182: 98.7823% ( 11) 00:30:32.409 11393.182 - 11443.594: 98.8434% ( 11) 00:30:32.409 11443.594 - 11494.006: 98.9046% ( 11) 00:30:32.409 11494.006 - 11544.418: 98.9435% ( 7) 00:30:32.409 11544.418 - 11594.831: 98.9824% ( 7) 00:30:32.409 11594.831 - 11645.243: 99.0158% ( 6) 00:30:32.409 11645.243 - 11695.655: 99.0547% ( 7) 00:30:32.409 11695.655 - 11746.068: 99.0992% ( 8) 00:30:32.409 11746.068 - 11796.480: 99.1381% ( 7) 00:30:32.409 11796.480 - 11846.892: 99.1770% ( 7) 00:30:32.409 11846.892 - 11897.305: 99.2048% ( 5) 00:30:32.409 11897.305 - 11947.717: 99.2215% ( 3) 00:30:32.409 11947.717 - 11998.129: 99.2382% ( 3) 00:30:32.409 11998.129 - 12048.542: 99.2549% ( 3) 00:30:32.409 12048.542 - 12098.954: 99.2716% ( 3) 00:30:32.409 12098.954 - 12149.366: 99.2883% ( 3) 00:30:32.409 19963.274 - 20064.098: 99.3049% ( 3) 00:30:32.409 20064.098 - 20164.923: 99.3327% ( 5) 00:30:32.409 20164.923 - 20265.748: 99.3605% ( 5) 00:30:32.409 20265.748 - 20366.572: 99.3828% ( 4) 00:30:32.409 20366.572 - 20467.397: 99.4106% ( 5) 00:30:32.409 20467.397 - 20568.222: 99.4384% ( 5) 00:30:32.409 20568.222 - 20669.046: 99.4662% ( 5) 00:30:32.409 20669.046 - 20769.871: 99.4940% ( 5) 00:30:32.409 20769.871 - 20870.695: 99.5218% ( 5) 00:30:32.409 20870.695 - 20971.520: 99.5440% ( 4) 00:30:32.409 20971.520 - 21072.345: 99.5718% ( 5) 00:30:32.409 21072.345 - 21173.169: 99.5996% ( 5) 00:30:32.409 21173.169 - 21273.994: 99.6274% ( 5) 00:30:32.409 21273.994 - 21374.818: 99.6441% ( 3) 00:30:32.409 24197.908 - 24298.732: 99.6608% ( 3) 00:30:32.409 24298.732 - 24399.557: 99.6886% ( 5) 00:30:32.409 24399.557 - 24500.382: 99.7164% ( 5) 00:30:32.409 24500.382 - 24601.206: 99.7442% ( 5) 00:30:32.409 24601.206 - 24702.031: 99.7720% ( 5) 00:30:32.409 24702.031 - 24802.855: 99.7943% ( 4) 00:30:32.410 24802.855 - 24903.680: 99.8221% ( 5) 00:30:32.410 24903.680 - 25004.505: 99.8554% ( 6) 00:30:32.410 25004.505 - 25105.329: 99.8832% ( 5) 00:30:32.410 25105.329 - 25206.154: 99.9110% ( 5) 00:30:32.410 25206.154 - 25306.978: 99.9388% ( 5) 00:30:32.410 25306.978 - 25407.803: 99.9666% ( 5) 00:30:32.410 25407.803 - 25508.628: 99.9944% ( 5) 00:30:32.410 25508.628 - 25609.452: 100.0000% ( 1) 00:30:32.410 00:30:32.410 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:30:32.410 
============================================================================== 00:30:32.410 Range in us Cumulative IO count 00:30:32.410 5671.385 - 5696.591: 0.0222% ( 4) 00:30:32.410 5696.591 - 5721.797: 0.0443% ( 4) 00:30:32.410 5721.797 - 5747.003: 0.0831% ( 7) 00:30:32.410 5747.003 - 5772.209: 0.1551% ( 13) 00:30:32.410 5772.209 - 5797.415: 0.2438% ( 16) 00:30:32.410 5797.415 - 5822.622: 0.3989% ( 28) 00:30:32.410 5822.622 - 5847.828: 0.5652% ( 30) 00:30:32.410 5847.828 - 5873.034: 0.7923% ( 41) 00:30:32.410 5873.034 - 5898.240: 1.0749% ( 51) 00:30:32.410 5898.240 - 5923.446: 1.4794% ( 73) 00:30:32.410 5923.446 - 5948.652: 1.9781% ( 90) 00:30:32.410 5948.652 - 5973.858: 2.4878% ( 92) 00:30:32.410 5973.858 - 5999.065: 3.1693% ( 123) 00:30:32.410 5999.065 - 6024.271: 4.1445% ( 176) 00:30:32.410 6024.271 - 6049.477: 5.3912% ( 225) 00:30:32.410 6049.477 - 6074.683: 6.7320% ( 242) 00:30:32.410 6074.683 - 6099.889: 8.1893% ( 263) 00:30:32.410 6099.889 - 6125.095: 9.8903% ( 307) 00:30:32.410 6125.095 - 6150.302: 11.7132% ( 329) 00:30:32.410 6150.302 - 6175.508: 13.6746% ( 354) 00:30:32.410 6175.508 - 6200.714: 15.6361% ( 354) 00:30:32.410 6200.714 - 6225.920: 17.6363% ( 361) 00:30:32.410 6225.920 - 6251.126: 19.6698% ( 367) 00:30:32.410 6251.126 - 6276.332: 21.6922% ( 365) 00:30:32.410 6276.332 - 6301.538: 23.6702% ( 357) 00:30:32.410 6301.538 - 6326.745: 25.6926% ( 365) 00:30:32.410 6326.745 - 6351.951: 27.6984% ( 362) 00:30:32.410 6351.951 - 6377.157: 29.7928% ( 378) 00:30:32.410 6377.157 - 6402.363: 31.9204% ( 384) 00:30:32.410 6402.363 - 6427.569: 33.9871% ( 373) 00:30:32.410 6427.569 - 6452.775: 36.1702% ( 394) 00:30:32.410 6452.775 - 6503.188: 40.5585% ( 792) 00:30:32.410 6503.188 - 6553.600: 44.8083% ( 767) 00:30:32.410 6553.600 - 6604.012: 48.9805% ( 753) 00:30:32.410 6604.012 - 6654.425: 53.1084% ( 745) 00:30:32.410 6654.425 - 6704.837: 57.1531% ( 730) 00:30:32.410 6704.837 - 6755.249: 60.9707% ( 689) 00:30:32.410 6755.249 - 6805.662: 64.5224% ( 641) 00:30:32.410 6805.662 - 6856.074: 67.6474% ( 564) 00:30:32.410 6856.074 - 6906.486: 70.1961% ( 460) 00:30:32.410 6906.486 - 6956.898: 72.1077% ( 345) 00:30:32.410 6956.898 - 7007.311: 73.4763% ( 247) 00:30:32.410 7007.311 - 7057.723: 74.4847% ( 182) 00:30:32.410 7057.723 - 7108.135: 75.3657% ( 159) 00:30:32.410 7108.135 - 7158.548: 76.2578% ( 161) 00:30:32.410 7158.548 - 7208.960: 77.0335% ( 140) 00:30:32.410 7208.960 - 7259.372: 77.6873% ( 118) 00:30:32.410 7259.372 - 7309.785: 78.2580% ( 103) 00:30:32.410 7309.785 - 7360.197: 78.8010% ( 98) 00:30:32.410 7360.197 - 7410.609: 79.3551% ( 100) 00:30:32.410 7410.609 - 7461.022: 79.8537% ( 90) 00:30:32.410 7461.022 - 7511.434: 80.3746% ( 94) 00:30:32.410 7511.434 - 7561.846: 80.8289% ( 82) 00:30:32.410 7561.846 - 7612.258: 81.3165% ( 88) 00:30:32.410 7612.258 - 7662.671: 81.7542% ( 79) 00:30:32.410 7662.671 - 7713.083: 82.1365% ( 69) 00:30:32.410 7713.083 - 7763.495: 82.4967% ( 65) 00:30:32.410 7763.495 - 7813.908: 82.8624% ( 66) 00:30:32.410 7813.908 - 7864.320: 83.2281% ( 66) 00:30:32.410 7864.320 - 7914.732: 83.6048% ( 68) 00:30:32.410 7914.732 - 7965.145: 83.9650% ( 65) 00:30:32.410 7965.145 - 8015.557: 84.2753% ( 56) 00:30:32.410 8015.557 - 8065.969: 84.5855% ( 56) 00:30:32.410 8065.969 - 8116.382: 84.9069% ( 58) 00:30:32.410 8116.382 - 8166.794: 85.2117% ( 55) 00:30:32.410 8166.794 - 8217.206: 85.4776% ( 48) 00:30:32.410 8217.206 - 8267.618: 85.7436% ( 48) 00:30:32.410 8267.618 - 8318.031: 86.0095% ( 48) 00:30:32.410 8318.031 - 8368.443: 86.2422% ( 42) 00:30:32.410 8368.443 - 8418.855: 
86.4694% ( 41) 00:30:32.410 8418.855 - 8469.268: 86.7021% ( 42) 00:30:32.410 8469.268 - 8519.680: 86.9902% ( 52) 00:30:32.410 8519.680 - 8570.092: 87.2673% ( 50) 00:30:32.410 8570.092 - 8620.505: 87.5277% ( 47) 00:30:32.410 8620.505 - 8670.917: 87.7992% ( 49) 00:30:32.410 8670.917 - 8721.329: 88.0818% ( 51) 00:30:32.410 8721.329 - 8771.742: 88.4031% ( 58) 00:30:32.410 8771.742 - 8822.154: 88.7245% ( 58) 00:30:32.410 8822.154 - 8872.566: 89.0791% ( 64) 00:30:32.410 8872.566 - 8922.978: 89.3949% ( 57) 00:30:32.410 8922.978 - 8973.391: 89.7385% ( 62) 00:30:32.410 8973.391 - 9023.803: 90.0875% ( 63) 00:30:32.410 9023.803 - 9074.215: 90.4865% ( 72) 00:30:32.410 9074.215 - 9124.628: 90.9131% ( 77) 00:30:32.410 9124.628 - 9175.040: 91.2844% ( 67) 00:30:32.410 9175.040 - 9225.452: 91.6777% ( 71) 00:30:32.410 9225.452 - 9275.865: 92.0601% ( 69) 00:30:32.410 9275.865 - 9326.277: 92.4922% ( 78) 00:30:32.410 9326.277 - 9376.689: 92.8081% ( 57) 00:30:32.410 9376.689 - 9427.102: 93.1073% ( 54) 00:30:32.410 9427.102 - 9477.514: 93.4231% ( 57) 00:30:32.410 9477.514 - 9527.926: 93.7666% ( 62) 00:30:32.410 9527.926 - 9578.338: 94.0769% ( 56) 00:30:32.410 9578.338 - 9628.751: 94.3927% ( 57) 00:30:32.410 9628.751 - 9679.163: 94.6809% ( 52) 00:30:32.410 9679.163 - 9729.575: 94.9911% ( 56) 00:30:32.410 9729.575 - 9779.988: 95.3070% ( 57) 00:30:32.410 9779.988 - 9830.400: 95.6006% ( 53) 00:30:32.410 9830.400 - 9880.812: 95.8777% ( 50) 00:30:32.410 9880.812 - 9931.225: 96.1159% ( 43) 00:30:32.410 9931.225 - 9981.637: 96.3320% ( 39) 00:30:32.410 9981.637 - 10032.049: 96.5426% ( 38) 00:30:32.410 10032.049 - 10082.462: 96.7365% ( 35) 00:30:32.410 10082.462 - 10132.874: 96.8695% ( 24) 00:30:32.410 10132.874 - 10183.286: 97.0024% ( 24) 00:30:32.410 10183.286 - 10233.698: 97.1243% ( 22) 00:30:32.410 10233.698 - 10284.111: 97.2518% ( 23) 00:30:32.410 10284.111 - 10334.523: 97.3792% ( 23) 00:30:32.410 10334.523 - 10384.935: 97.4789% ( 18) 00:30:32.410 10384.935 - 10435.348: 97.5787% ( 18) 00:30:32.410 10435.348 - 10485.760: 97.6673% ( 16) 00:30:32.410 10485.760 - 10536.172: 97.7726% ( 19) 00:30:32.410 10536.172 - 10586.585: 97.8668% ( 17) 00:30:32.410 10586.585 - 10636.997: 97.9167% ( 9) 00:30:32.410 10636.997 - 10687.409: 97.9721% ( 10) 00:30:32.410 10687.409 - 10737.822: 98.0164% ( 8) 00:30:32.410 10737.822 - 10788.234: 98.0995% ( 15) 00:30:32.410 10788.234 - 10838.646: 98.1660% ( 12) 00:30:32.410 10838.646 - 10889.058: 98.2380% ( 13) 00:30:32.410 10889.058 - 10939.471: 98.3156% ( 14) 00:30:32.410 10939.471 - 10989.883: 98.4098% ( 17) 00:30:32.410 10989.883 - 11040.295: 98.4818% ( 13) 00:30:32.410 11040.295 - 11090.708: 98.5539% ( 13) 00:30:32.410 11090.708 - 11141.120: 98.6148% ( 11) 00:30:32.410 11141.120 - 11191.532: 98.6813% ( 12) 00:30:32.410 11191.532 - 11241.945: 98.7533% ( 13) 00:30:32.410 11241.945 - 11292.357: 98.8087% ( 10) 00:30:32.410 11292.357 - 11342.769: 98.8697% ( 11) 00:30:32.410 11342.769 - 11393.182: 98.9306% ( 11) 00:30:32.410 11393.182 - 11443.594: 98.9860% ( 10) 00:30:32.410 11443.594 - 11494.006: 99.0470% ( 11) 00:30:32.410 11494.006 - 11544.418: 99.0969% ( 9) 00:30:32.410 11544.418 - 11594.831: 99.1356% ( 7) 00:30:32.410 11594.831 - 11645.243: 99.1578% ( 4) 00:30:32.410 11645.243 - 11695.655: 99.1744% ( 3) 00:30:32.410 11695.655 - 11746.068: 99.1910% ( 3) 00:30:32.410 11746.068 - 11796.480: 99.2077% ( 3) 00:30:32.410 11796.480 - 11846.892: 99.2243% ( 3) 00:30:32.410 11846.892 - 11897.305: 99.2409% ( 3) 00:30:32.410 11897.305 - 11947.717: 99.2575% ( 3) 00:30:32.410 11947.717 - 11998.129: 
99.2742% ( 3) 00:30:32.410 11998.129 - 12048.542: 99.2908% ( 3) 00:30:32.410 14821.218 - 14922.043: 99.3019% ( 2) 00:30:32.410 14922.043 - 15022.868: 99.3240% ( 4) 00:30:32.410 15022.868 - 15123.692: 99.3517% ( 5) 00:30:32.410 15123.692 - 15224.517: 99.3794% ( 5) 00:30:32.410 15224.517 - 15325.342: 99.4071% ( 5) 00:30:32.410 15325.342 - 15426.166: 99.4348% ( 5) 00:30:32.410 15426.166 - 15526.991: 99.4625% ( 5) 00:30:32.410 15526.991 - 15627.815: 99.4902% ( 5) 00:30:32.410 15627.815 - 15728.640: 99.5180% ( 5) 00:30:32.410 15728.640 - 15829.465: 99.5457% ( 5) 00:30:32.410 15829.465 - 15930.289: 99.5789% ( 6) 00:30:32.410 15930.289 - 16031.114: 99.6066% ( 5) 00:30:32.410 16031.114 - 16131.938: 99.6343% ( 5) 00:30:32.410 16131.938 - 16232.763: 99.6454% ( 2) 00:30:32.410 19559.975 - 19660.800: 99.6731% ( 5) 00:30:32.410 19660.800 - 19761.625: 99.7008% ( 5) 00:30:32.410 19761.625 - 19862.449: 99.7285% ( 5) 00:30:32.410 19862.449 - 19963.274: 99.7507% ( 4) 00:30:32.410 19963.274 - 20064.098: 99.7784% ( 5) 00:30:32.410 20064.098 - 20164.923: 99.8005% ( 4) 00:30:32.410 20164.923 - 20265.748: 99.8227% ( 4) 00:30:32.410 20265.748 - 20366.572: 99.8504% ( 5) 00:30:32.410 20366.572 - 20467.397: 99.8726% ( 4) 00:30:32.410 20467.397 - 20568.222: 99.9003% ( 5) 00:30:32.410 20568.222 - 20669.046: 99.9280% ( 5) 00:30:32.410 20669.046 - 20769.871: 99.9501% ( 4) 00:30:32.410 20769.871 - 20870.695: 99.9778% ( 5) 00:30:32.410 20870.695 - 20971.520: 100.0000% ( 4) 00:30:32.410 00:30:32.410 15:55:53 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:30:33.346 Initializing NVMe Controllers 00:30:33.346 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:30:33.346 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:30:33.346 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:30:33.346 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:30:33.346 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:30:33.346 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:30:33.346 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:30:33.346 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:30:33.346 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:30:33.346 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:30:33.346 Initialization complete. Launching workers. 
00:30:33.346 ======================================================== 00:30:33.346 Latency(us) 00:30:33.346 Device Information : IOPS MiB/s Average min max 00:30:33.346 PCIE (0000:00:10.0) NSID 1 from core 0: 15780.56 184.93 8122.07 6258.80 33820.44 00:30:33.346 PCIE (0000:00:11.0) NSID 1 from core 0: 15780.56 184.93 8109.35 6206.08 32061.93 00:30:33.346 PCIE (0000:00:13.0) NSID 1 from core 0: 15780.56 184.93 8096.49 6311.85 30773.78 00:30:33.346 PCIE (0000:00:12.0) NSID 1 from core 0: 15780.56 184.93 8083.97 6215.09 28980.32 00:30:33.346 PCIE (0000:00:12.0) NSID 2 from core 0: 15780.56 184.93 8071.31 6372.54 27257.86 00:30:33.346 PCIE (0000:00:12.0) NSID 3 from core 0: 15844.45 185.68 8026.12 6322.53 21653.36 00:30:33.346 ======================================================== 00:30:33.346 Total : 94747.23 1110.32 8084.84 6206.08 33820.44 00:30:33.346 00:30:33.346 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:30:33.346 ================================================================================= 00:30:33.346 1.00000% : 6604.012us 00:30:33.346 10.00000% : 6856.074us 00:30:33.346 25.00000% : 7057.723us 00:30:33.346 50.00000% : 7511.434us 00:30:33.346 75.00000% : 8922.978us 00:30:33.346 90.00000% : 9679.163us 00:30:33.346 95.00000% : 10132.874us 00:30:33.346 98.00000% : 10838.646us 00:30:33.346 99.00000% : 13107.200us 00:30:33.346 99.50000% : 28230.892us 00:30:33.346 99.90000% : 33473.772us 00:30:33.346 99.99000% : 33877.071us 00:30:33.346 99.99900% : 33877.071us 00:30:33.346 99.99990% : 33877.071us 00:30:33.346 99.99999% : 33877.071us 00:30:33.346 00:30:33.346 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:30:33.346 ================================================================================= 00:30:33.346 1.00000% : 6704.837us 00:30:33.346 10.00000% : 6956.898us 00:30:33.346 25.00000% : 7108.135us 00:30:33.346 50.00000% : 7410.609us 00:30:33.346 75.00000% : 8922.978us 00:30:33.346 90.00000% : 9679.163us 00:30:33.346 95.00000% : 10032.049us 00:30:33.346 98.00000% : 10737.822us 00:30:33.346 99.00000% : 13006.375us 00:30:33.346 99.50000% : 26416.049us 00:30:33.346 99.90000% : 31860.578us 00:30:33.346 99.99000% : 32062.228us 00:30:33.346 99.99900% : 32062.228us 00:30:33.346 99.99990% : 32062.228us 00:30:33.346 99.99999% : 32062.228us 00:30:33.346 00:30:33.346 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:30:33.346 ================================================================================= 00:30:33.346 1.00000% : 6654.425us 00:30:33.346 10.00000% : 6956.898us 00:30:33.346 25.00000% : 7108.135us 00:30:33.346 50.00000% : 7410.609us 00:30:33.346 75.00000% : 8922.978us 00:30:33.346 90.00000% : 9679.163us 00:30:33.346 95.00000% : 9981.637us 00:30:33.346 98.00000% : 10788.234us 00:30:33.346 99.00000% : 12754.314us 00:30:33.346 99.50000% : 25508.628us 00:30:33.346 99.90000% : 30449.034us 00:30:33.346 99.99000% : 30852.332us 00:30:33.346 99.99900% : 30852.332us 00:30:33.346 99.99990% : 30852.332us 00:30:33.346 99.99999% : 30852.332us 00:30:33.346 00:30:33.346 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:30:33.346 ================================================================================= 00:30:33.346 1.00000% : 6654.425us 00:30:33.346 10.00000% : 6956.898us 00:30:33.346 25.00000% : 7108.135us 00:30:33.346 50.00000% : 7410.609us 00:30:33.346 75.00000% : 8872.566us 00:30:33.346 90.00000% : 9679.163us 00:30:33.346 95.00000% : 9981.637us 00:30:33.346 98.00000% : 10636.997us 00:30:33.346 99.00000% : 
12603.077us 00:30:33.346 99.50000% : 23794.609us 00:30:33.346 99.90000% : 28634.191us 00:30:33.346 99.99000% : 29037.489us 00:30:33.346 99.99900% : 29037.489us 00:30:33.346 99.99990% : 29037.489us 00:30:33.346 99.99999% : 29037.489us 00:30:33.346 00:30:33.346 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:30:33.346 ================================================================================= 00:30:33.346 1.00000% : 6704.837us 00:30:33.346 10.00000% : 6956.898us 00:30:33.346 25.00000% : 7108.135us 00:30:33.346 50.00000% : 7410.609us 00:30:33.346 75.00000% : 8872.566us 00:30:33.346 90.00000% : 9679.163us 00:30:33.346 95.00000% : 10082.462us 00:30:33.346 98.00000% : 10687.409us 00:30:33.346 99.00000% : 13107.200us 00:30:33.346 99.50000% : 21979.766us 00:30:33.346 99.90000% : 27020.997us 00:30:33.346 99.99000% : 27424.295us 00:30:33.346 99.99900% : 27424.295us 00:30:33.346 99.99990% : 27424.295us 00:30:33.346 99.99999% : 27424.295us 00:30:33.346 00:30:33.346 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:30:33.346 ================================================================================= 00:30:33.346 1.00000% : 6704.837us 00:30:33.346 10.00000% : 6956.898us 00:30:33.346 25.00000% : 7108.135us 00:30:33.346 50.00000% : 7410.609us 00:30:33.346 75.00000% : 8922.978us 00:30:33.346 90.00000% : 9729.575us 00:30:33.346 95.00000% : 10032.049us 00:30:33.346 98.00000% : 10737.822us 00:30:33.346 99.00000% : 13208.025us 00:30:33.346 99.50000% : 15627.815us 00:30:33.346 99.90000% : 21374.818us 00:30:33.346 99.99000% : 21677.292us 00:30:33.346 99.99900% : 21677.292us 00:30:33.346 99.99990% : 21677.292us 00:30:33.346 99.99999% : 21677.292us 00:30:33.346 00:30:33.346 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:30:33.346 ============================================================================== 00:30:33.346 Range in us Cumulative IO count 00:30:33.346 6251.126 - 6276.332: 0.0063% ( 1) 00:30:33.346 6276.332 - 6301.538: 0.0127% ( 1) 00:30:33.346 6301.538 - 6326.745: 0.0253% ( 2) 00:30:33.346 6326.745 - 6351.951: 0.0443% ( 3) 00:30:33.346 6377.157 - 6402.363: 0.0506% ( 1) 00:30:33.346 6402.363 - 6427.569: 0.1202% ( 11) 00:30:33.346 6427.569 - 6452.775: 0.3226% ( 32) 00:30:33.346 6452.775 - 6503.188: 0.5124% ( 30) 00:30:33.346 6503.188 - 6553.600: 0.6705% ( 25) 00:30:33.346 6553.600 - 6604.012: 1.0881% ( 66) 00:30:33.346 6604.012 - 6654.425: 2.0496% ( 152) 00:30:33.346 6654.425 - 6704.837: 3.6121% ( 247) 00:30:33.346 6704.837 - 6755.249: 5.6617% ( 324) 00:30:33.346 6755.249 - 6805.662: 8.3692% ( 428) 00:30:33.346 6805.662 - 6856.074: 11.2095% ( 449) 00:30:33.346 6856.074 - 6906.486: 15.0746% ( 611) 00:30:33.346 6906.486 - 6956.898: 19.2624% ( 662) 00:30:33.346 6956.898 - 7007.311: 23.0516% ( 599) 00:30:33.346 7007.311 - 7057.723: 26.5625% ( 555) 00:30:33.346 7057.723 - 7108.135: 29.9469% ( 535) 00:30:33.346 7108.135 - 7158.548: 33.2363% ( 520) 00:30:33.346 7158.548 - 7208.960: 36.3360% ( 490) 00:30:33.346 7208.960 - 7259.372: 38.9676% ( 416) 00:30:33.346 7259.372 - 7309.785: 41.5739% ( 412) 00:30:33.346 7309.785 - 7360.197: 43.6931% ( 335) 00:30:33.346 7360.197 - 7410.609: 46.0147% ( 367) 00:30:33.346 7410.609 - 7461.022: 48.1402% ( 336) 00:30:33.346 7461.022 - 7511.434: 50.1139% ( 312) 00:30:33.346 7511.434 - 7561.846: 52.0433% ( 305) 00:30:33.346 7561.846 - 7612.258: 53.9157% ( 296) 00:30:33.346 7612.258 - 7662.671: 55.7376% ( 288) 00:30:33.346 7662.671 - 7713.083: 57.4329% ( 268) 00:30:33.347 7713.083 - 7763.495: 58.8183% ( 219) 
00:30:33.347 7763.495 - 7813.908: 59.9633% ( 181) 00:30:33.347 7813.908 - 7864.320: 61.0956% ( 179) 00:30:33.347 7864.320 - 7914.732: 62.3419% ( 197) 00:30:33.347 7914.732 - 7965.145: 63.4173% ( 170) 00:30:33.347 7965.145 - 8015.557: 64.2649% ( 134) 00:30:33.347 8015.557 - 8065.969: 64.8849% ( 98) 00:30:33.347 8065.969 - 8116.382: 65.4732% ( 93) 00:30:33.347 8116.382 - 8166.794: 66.0109% ( 85) 00:30:33.347 8166.794 - 8217.206: 66.4916% ( 76) 00:30:33.347 8217.206 - 8267.618: 66.9787% ( 77) 00:30:33.347 8267.618 - 8318.031: 67.3773% ( 63) 00:30:33.347 8318.031 - 8368.443: 67.7885% ( 65) 00:30:33.347 8368.443 - 8418.855: 68.1680% ( 60) 00:30:33.347 8418.855 - 8469.268: 68.7500% ( 92) 00:30:33.347 8469.268 - 8519.680: 69.7748% ( 162) 00:30:33.347 8519.680 - 8570.092: 70.4643% ( 109) 00:30:33.347 8570.092 - 8620.505: 71.0653% ( 95) 00:30:33.347 8620.505 - 8670.917: 71.8434% ( 123) 00:30:33.347 8670.917 - 8721.329: 72.4507% ( 96) 00:30:33.347 8721.329 - 8771.742: 73.1212% ( 106) 00:30:33.347 8771.742 - 8822.154: 73.7411% ( 98) 00:30:33.347 8822.154 - 8872.566: 74.3421% ( 95) 00:30:33.347 8872.566 - 8922.978: 75.1139% ( 122) 00:30:33.347 8922.978 - 8973.391: 76.0691% ( 151) 00:30:33.347 8973.391 - 9023.803: 76.9737% ( 143) 00:30:33.347 9023.803 - 9074.215: 77.9352% ( 152) 00:30:33.347 9074.215 - 9124.628: 78.8019% ( 137) 00:30:33.347 9124.628 - 9175.040: 79.7950% ( 157) 00:30:33.347 9175.040 - 9225.452: 80.8768% ( 171) 00:30:33.347 9225.452 - 9275.865: 81.9965% ( 177) 00:30:33.347 9275.865 - 9326.277: 82.9706% ( 154) 00:30:33.347 9326.277 - 9376.689: 84.0144% ( 165) 00:30:33.347 9376.689 - 9427.102: 85.6528% ( 259) 00:30:33.347 9427.102 - 9477.514: 86.6080% ( 151) 00:30:33.347 9477.514 - 9527.926: 87.5506% ( 149) 00:30:33.347 9527.926 - 9578.338: 88.4995% ( 150) 00:30:33.347 9578.338 - 9628.751: 89.3029% ( 127) 00:30:33.347 9628.751 - 9679.163: 90.1442% ( 133) 00:30:33.347 9679.163 - 9729.575: 90.7768% ( 100) 00:30:33.347 9729.575 - 9779.988: 91.2892% ( 81) 00:30:33.347 9779.988 - 9830.400: 91.9408% ( 103) 00:30:33.347 9830.400 - 9880.812: 92.5418% ( 95) 00:30:33.347 9880.812 - 9931.225: 93.1048% ( 89) 00:30:33.347 9931.225 - 9981.637: 93.7373% ( 100) 00:30:33.347 9981.637 - 10032.049: 94.2308% ( 78) 00:30:33.347 10032.049 - 10082.462: 94.7179% ( 77) 00:30:33.347 10082.462 - 10132.874: 95.1607% ( 70) 00:30:33.347 10132.874 - 10183.286: 95.5402% ( 60) 00:30:33.347 10183.286 - 10233.698: 95.9324% ( 62) 00:30:33.347 10233.698 - 10284.111: 96.3816% ( 71) 00:30:33.347 10284.111 - 10334.523: 96.7105% ( 52) 00:30:33.347 10334.523 - 10384.935: 96.9446% ( 37) 00:30:33.347 10384.935 - 10435.348: 97.1217% ( 28) 00:30:33.347 10435.348 - 10485.760: 97.2925% ( 27) 00:30:33.347 10485.760 - 10536.172: 97.4254% ( 21) 00:30:33.347 10536.172 - 10586.585: 97.5645% ( 22) 00:30:33.347 10586.585 - 10636.997: 97.6784% ( 18) 00:30:33.347 10636.997 - 10687.409: 97.7923% ( 18) 00:30:33.347 10687.409 - 10737.822: 97.8745% ( 13) 00:30:33.347 10737.822 - 10788.234: 97.9631% ( 14) 00:30:33.347 10788.234 - 10838.646: 98.0453% ( 13) 00:30:33.347 10838.646 - 10889.058: 98.1402% ( 15) 00:30:33.347 10889.058 - 10939.471: 98.2414% ( 16) 00:30:33.347 10939.471 - 10989.883: 98.3047% ( 10) 00:30:33.347 10989.883 - 11040.295: 98.3806% ( 12) 00:30:33.347 11040.295 - 11090.708: 98.4185% ( 6) 00:30:33.347 11090.708 - 11141.120: 98.4818% ( 10) 00:30:33.347 11141.120 - 11191.532: 98.4944% ( 2) 00:30:33.347 11191.532 - 11241.945: 98.5261% ( 5) 00:30:33.347 11241.945 - 11292.357: 98.5640% ( 6) 00:30:33.347 11292.357 - 11342.769: 98.5893% ( 4) 
00:30:33.347 11342.769 - 11393.182: 98.6210% ( 5) 00:30:33.347 11393.182 - 11443.594: 98.6399% ( 3) 00:30:33.347 11443.594 - 11494.006: 98.6589% ( 3) 00:30:33.347 11494.006 - 11544.418: 98.6779% ( 3) 00:30:33.347 11544.418 - 11594.831: 98.6969% ( 3) 00:30:33.347 11594.831 - 11645.243: 98.7222% ( 4) 00:30:33.347 11645.243 - 11695.655: 98.7348% ( 2) 00:30:33.347 11695.655 - 11746.068: 98.7538% ( 3) 00:30:33.347 11746.068 - 11796.480: 98.7728% ( 3) 00:30:33.347 11796.480 - 11846.892: 98.7854% ( 2) 00:30:33.347 12502.252 - 12552.665: 98.8171% ( 5) 00:30:33.347 12552.665 - 12603.077: 98.8487% ( 5) 00:30:33.347 12603.077 - 12653.489: 98.8677% ( 3) 00:30:33.347 12653.489 - 12703.902: 98.8866% ( 3) 00:30:33.347 12703.902 - 12754.314: 98.8930% ( 1) 00:30:33.347 12754.314 - 12804.726: 98.9056% ( 2) 00:30:33.347 12804.726 - 12855.138: 98.9246% ( 3) 00:30:33.347 12855.138 - 12905.551: 98.9372% ( 2) 00:30:33.347 12905.551 - 13006.375: 98.9815% ( 7) 00:30:33.347 13006.375 - 13107.200: 99.0195% ( 6) 00:30:33.347 13107.200 - 13208.025: 99.0574% ( 6) 00:30:33.347 13208.025 - 13308.849: 99.0827% ( 4) 00:30:33.347 13308.849 - 13409.674: 99.1207% ( 6) 00:30:33.347 13409.674 - 13510.498: 99.1587% ( 6) 00:30:33.347 13510.498 - 13611.323: 99.1903% ( 5) 00:30:33.347 26819.348 - 27020.997: 99.2219% ( 5) 00:30:33.347 27020.997 - 27222.646: 99.2978% ( 12) 00:30:33.347 27222.646 - 27424.295: 99.3611% ( 10) 00:30:33.347 27424.295 - 27625.945: 99.3990% ( 6) 00:30:33.347 27625.945 - 27827.594: 99.4307% ( 5) 00:30:33.347 27827.594 - 28029.243: 99.4749% ( 7) 00:30:33.347 28029.243 - 28230.892: 99.5192% ( 7) 00:30:33.347 28230.892 - 28432.542: 99.5635% ( 7) 00:30:33.347 28432.542 - 28634.191: 99.5951% ( 5) 00:30:33.347 32062.228 - 32263.877: 99.6331% ( 6) 00:30:33.347 32263.877 - 32465.526: 99.6711% ( 6) 00:30:33.347 32465.526 - 32667.175: 99.7217% ( 8) 00:30:33.347 32667.175 - 32868.825: 99.7723% ( 8) 00:30:33.347 32868.825 - 33070.474: 99.8229% ( 8) 00:30:33.347 33070.474 - 33272.123: 99.8735% ( 8) 00:30:33.347 33272.123 - 33473.772: 99.9241% ( 8) 00:30:33.347 33473.772 - 33675.422: 99.9747% ( 8) 00:30:33.347 33675.422 - 33877.071: 100.0000% ( 4) 00:30:33.347 00:30:33.347 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:30:33.347 ============================================================================== 00:30:33.347 Range in us Cumulative IO count 00:30:33.347 6200.714 - 6225.920: 0.0063% ( 1) 00:30:33.347 6351.951 - 6377.157: 0.0127% ( 1) 00:30:33.347 6427.569 - 6452.775: 0.0253% ( 2) 00:30:33.347 6452.775 - 6503.188: 0.1012% ( 12) 00:30:33.347 6503.188 - 6553.600: 0.2088% ( 17) 00:30:33.347 6553.600 - 6604.012: 0.4365% ( 36) 00:30:33.347 6604.012 - 6654.425: 0.9362% ( 79) 00:30:33.347 6654.425 - 6704.837: 1.3980% ( 73) 00:30:33.347 6704.837 - 6755.249: 2.0053% ( 96) 00:30:33.347 6755.249 - 6805.662: 3.0934% ( 172) 00:30:33.347 6805.662 - 6856.074: 5.2505% ( 341) 00:30:33.347 6856.074 - 6906.486: 7.9770% ( 431) 00:30:33.347 6906.486 - 6956.898: 11.6966% ( 588) 00:30:33.347 6956.898 - 7007.311: 15.8654% ( 659) 00:30:33.347 7007.311 - 7057.723: 21.6726% ( 918) 00:30:33.347 7057.723 - 7108.135: 27.5304% ( 926) 00:30:33.347 7108.135 - 7158.548: 32.9833% ( 862) 00:30:33.347 7158.548 - 7208.960: 37.5949% ( 729) 00:30:33.347 7208.960 - 7259.372: 41.6688% ( 644) 00:30:33.347 7259.372 - 7309.785: 45.6414% ( 628) 00:30:33.347 7309.785 - 7360.197: 48.8993% ( 515) 00:30:33.347 7360.197 - 7410.609: 51.3284% ( 384) 00:30:33.347 7410.609 - 7461.022: 53.5362% ( 349) 00:30:33.347 7461.022 - 7511.434: 54.7887% ( 198) 
00:30:33.347 7511.434 - 7561.846: 56.0223% ( 195) 00:30:33.347 7561.846 - 7612.258: 57.3064% ( 203) 00:30:33.347 7612.258 - 7662.671: 58.2490% ( 149) 00:30:33.347 7662.671 - 7713.083: 58.9954% ( 118) 00:30:33.347 7713.083 - 7763.495: 59.6470% ( 103) 00:30:33.347 7763.495 - 7813.908: 60.5073% ( 136) 00:30:33.347 7813.908 - 7864.320: 61.2285% ( 114) 00:30:33.347 7864.320 - 7914.732: 61.9813% ( 119) 00:30:33.347 7914.732 - 7965.145: 62.7973% ( 129) 00:30:33.347 7965.145 - 8015.557: 63.9297% ( 179) 00:30:33.347 8015.557 - 8065.969: 65.0746% ( 181) 00:30:33.347 8065.969 - 8116.382: 65.6313% ( 88) 00:30:33.347 8116.382 - 8166.794: 66.2323% ( 95) 00:30:33.347 8166.794 - 8217.206: 66.7510% ( 82) 00:30:33.347 8217.206 - 8267.618: 67.1812% ( 68) 00:30:33.347 8267.618 - 8318.031: 67.5860% ( 64) 00:30:33.347 8318.031 - 8368.443: 68.2313% ( 102) 00:30:33.347 8368.443 - 8418.855: 68.5223% ( 46) 00:30:33.347 8418.855 - 8469.268: 68.7816% ( 41) 00:30:33.347 8469.268 - 8519.680: 69.0157% ( 37) 00:30:33.347 8519.680 - 8570.092: 69.3320% ( 50) 00:30:33.347 8570.092 - 8620.505: 69.8760% ( 86) 00:30:33.347 8620.505 - 8670.917: 70.7933% ( 145) 00:30:33.347 8670.917 - 8721.329: 71.7801% ( 156) 00:30:33.347 8721.329 - 8771.742: 72.7543% ( 154) 00:30:33.347 8771.742 - 8822.154: 73.7285% ( 154) 00:30:33.347 8822.154 - 8872.566: 74.5698% ( 133) 00:30:33.347 8872.566 - 8922.978: 75.3985% ( 131) 00:30:33.347 8922.978 - 8973.391: 76.1956% ( 126) 00:30:33.347 8973.391 - 9023.803: 76.9357% ( 117) 00:30:33.347 9023.803 - 9074.215: 77.7201% ( 124) 00:30:33.347 9074.215 - 9124.628: 78.7513% ( 163) 00:30:33.347 9124.628 - 9175.040: 79.5736% ( 130) 00:30:33.347 9175.040 - 9225.452: 80.4909% ( 145) 00:30:33.347 9225.452 - 9275.865: 81.5283% ( 164) 00:30:33.347 9275.865 - 9326.277: 82.7998% ( 201) 00:30:33.347 9326.277 - 9376.689: 84.0840% ( 203) 00:30:33.347 9376.689 - 9427.102: 85.0835% ( 158) 00:30:33.347 9427.102 - 9477.514: 86.1526% ( 169) 00:30:33.347 9477.514 - 9527.926: 87.2533% ( 174) 00:30:33.347 9527.926 - 9578.338: 88.3097% ( 167) 00:30:33.347 9578.338 - 9628.751: 89.3408% ( 163) 00:30:33.347 9628.751 - 9679.163: 90.3087% ( 153) 00:30:33.347 9679.163 - 9729.575: 91.3462% ( 164) 00:30:33.347 9729.575 - 9779.988: 92.3014% ( 151) 00:30:33.347 9779.988 - 9830.400: 92.9972% ( 110) 00:30:33.347 9830.400 - 9880.812: 93.7310% ( 116) 00:30:33.347 9880.812 - 9931.225: 94.3193% ( 93) 00:30:33.347 9931.225 - 9981.637: 94.8760% ( 88) 00:30:33.348 9981.637 - 10032.049: 95.3631% ( 77) 00:30:33.348 10032.049 - 10082.462: 95.7933% ( 68) 00:30:33.348 10082.462 - 10132.874: 96.0906% ( 47) 00:30:33.348 10132.874 - 10183.286: 96.3816% ( 46) 00:30:33.348 10183.286 - 10233.698: 96.5587% ( 28) 00:30:33.348 10233.698 - 10284.111: 96.7485% ( 30) 00:30:33.348 10284.111 - 10334.523: 96.9383% ( 30) 00:30:33.348 10334.523 - 10384.935: 97.2482% ( 49) 00:30:33.348 10384.935 - 10435.348: 97.3684% ( 19) 00:30:33.348 10435.348 - 10485.760: 97.5139% ( 23) 00:30:33.348 10485.760 - 10536.172: 97.6468% ( 21) 00:30:33.348 10536.172 - 10586.585: 97.8176% ( 27) 00:30:33.348 10586.585 - 10636.997: 97.9124% ( 15) 00:30:33.348 10636.997 - 10687.409: 97.9631% ( 8) 00:30:33.348 10687.409 - 10737.822: 98.0263% ( 10) 00:30:33.348 10737.822 - 10788.234: 98.0896% ( 10) 00:30:33.348 10788.234 - 10838.646: 98.1402% ( 8) 00:30:33.348 10838.646 - 10889.058: 98.1781% ( 6) 00:30:33.348 10889.058 - 10939.471: 98.2224% ( 7) 00:30:33.348 10939.471 - 10989.883: 98.2857% ( 10) 00:30:33.348 10989.883 - 11040.295: 98.3047% ( 3) 00:30:33.348 11040.295 - 11090.708: 98.3173% ( 2) 
00:30:33.348 11090.708 - 11141.120: 98.3300% ( 2) 00:30:33.348 11141.120 - 11191.532: 98.3426% ( 2) 00:30:33.348 11191.532 - 11241.945: 98.3742% ( 5) 00:30:33.348 11241.945 - 11292.357: 98.4312% ( 9) 00:30:33.348 11292.357 - 11342.769: 98.4818% ( 8) 00:30:33.348 11342.769 - 11393.182: 98.5008% ( 3) 00:30:33.348 11393.182 - 11443.594: 98.5450% ( 7) 00:30:33.348 11443.594 - 11494.006: 98.5703% ( 4) 00:30:33.348 11494.006 - 11544.418: 98.5893% ( 3) 00:30:33.348 11544.418 - 11594.831: 98.6083% ( 3) 00:30:33.348 11594.831 - 11645.243: 98.6210% ( 2) 00:30:33.348 11645.243 - 11695.655: 98.6463% ( 4) 00:30:33.348 11695.655 - 11746.068: 98.6652% ( 3) 00:30:33.348 11746.068 - 11796.480: 98.6779% ( 2) 00:30:33.348 11796.480 - 11846.892: 98.7032% ( 4) 00:30:33.348 11846.892 - 11897.305: 98.7222% ( 3) 00:30:33.348 11897.305 - 11947.717: 98.7475% ( 4) 00:30:33.348 11947.717 - 11998.129: 98.7664% ( 3) 00:30:33.348 11998.129 - 12048.542: 98.7854% ( 3) 00:30:33.348 12502.252 - 12552.665: 98.8360% ( 8) 00:30:33.348 12552.665 - 12603.077: 98.8487% ( 2) 00:30:33.348 12603.077 - 12653.489: 98.8613% ( 2) 00:30:33.348 12653.489 - 12703.902: 98.8866% ( 4) 00:30:33.348 12703.902 - 12754.314: 98.9056% ( 3) 00:30:33.348 12754.314 - 12804.726: 98.9309% ( 4) 00:30:33.348 12804.726 - 12855.138: 98.9436% ( 2) 00:30:33.348 12855.138 - 12905.551: 98.9689% ( 4) 00:30:33.348 12905.551 - 13006.375: 99.0132% ( 7) 00:30:33.348 13006.375 - 13107.200: 99.0511% ( 6) 00:30:33.348 13107.200 - 13208.025: 99.0954% ( 7) 00:30:33.348 13208.025 - 13308.849: 99.1334% ( 6) 00:30:33.348 13308.849 - 13409.674: 99.1713% ( 6) 00:30:33.348 13409.674 - 13510.498: 99.1903% ( 3) 00:30:33.348 25105.329 - 25206.154: 99.2093% ( 3) 00:30:33.348 25206.154 - 25306.978: 99.2346% ( 4) 00:30:33.348 25306.978 - 25407.803: 99.2599% ( 4) 00:30:33.348 25407.803 - 25508.628: 99.2852% ( 4) 00:30:33.348 25508.628 - 25609.452: 99.3105% ( 4) 00:30:33.348 25609.452 - 25710.277: 99.3358% ( 4) 00:30:33.348 25710.277 - 25811.102: 99.3611% ( 4) 00:30:33.348 25811.102 - 26012.751: 99.4117% ( 8) 00:30:33.348 26012.751 - 26214.400: 99.4560% ( 7) 00:30:33.348 26214.400 - 26416.049: 99.5066% ( 8) 00:30:33.348 26416.049 - 26617.698: 99.5572% ( 8) 00:30:33.348 26617.698 - 26819.348: 99.5951% ( 6) 00:30:33.348 30247.385 - 30449.034: 99.6015% ( 1) 00:30:33.348 30449.034 - 30650.683: 99.6394% ( 6) 00:30:33.348 30650.683 - 30852.332: 99.6900% ( 8) 00:30:33.348 30852.332 - 31053.982: 99.7406% ( 8) 00:30:33.348 31053.982 - 31255.631: 99.7912% ( 8) 00:30:33.348 31255.631 - 31457.280: 99.8419% ( 8) 00:30:33.348 31457.280 - 31658.929: 99.8925% ( 8) 00:30:33.348 31658.929 - 31860.578: 99.9431% ( 8) 00:30:33.348 31860.578 - 32062.228: 100.0000% ( 9) 00:30:33.348 00:30:33.348 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:30:33.348 ============================================================================== 00:30:33.348 Range in us Cumulative IO count 00:30:33.348 6301.538 - 6326.745: 0.0063% ( 1) 00:30:33.348 6351.951 - 6377.157: 0.0127% ( 1) 00:30:33.348 6377.157 - 6402.363: 0.0253% ( 2) 00:30:33.348 6402.363 - 6427.569: 0.0506% ( 4) 00:30:33.348 6427.569 - 6452.775: 0.0822% ( 5) 00:30:33.348 6452.775 - 6503.188: 0.1771% ( 15) 00:30:33.348 6503.188 - 6553.600: 0.2847% ( 17) 00:30:33.348 6553.600 - 6604.012: 0.6263% ( 54) 00:30:33.348 6604.012 - 6654.425: 1.0058% ( 60) 00:30:33.348 6654.425 - 6704.837: 1.6447% ( 101) 00:30:33.348 6704.837 - 6755.249: 2.5557% ( 144) 00:30:33.348 6755.249 - 6805.662: 3.9537% ( 221) 00:30:33.348 6805.662 - 6856.074: 5.7629% ( 286) 
00:30:33.348 6856.074 - 6906.486: 8.4830% ( 430) 00:30:33.348 6906.486 - 6956.898: 11.8105% ( 526) 00:30:33.348 6956.898 - 7007.311: 16.1501% ( 686) 00:30:33.348 7007.311 - 7057.723: 21.6030% ( 862) 00:30:33.348 7057.723 - 7108.135: 26.5815% ( 787) 00:30:33.348 7108.135 - 7158.548: 32.0534% ( 865) 00:30:33.348 7158.548 - 7208.960: 36.8548% ( 759) 00:30:33.348 7208.960 - 7259.372: 41.7510% ( 774) 00:30:33.348 7259.372 - 7309.785: 45.6161% ( 611) 00:30:33.348 7309.785 - 7360.197: 48.7791% ( 500) 00:30:33.348 7360.197 - 7410.609: 51.9357% ( 499) 00:30:33.348 7410.609 - 7461.022: 53.9600% ( 320) 00:30:33.348 7461.022 - 7511.434: 55.5352% ( 249) 00:30:33.348 7511.434 - 7561.846: 57.0471% ( 239) 00:30:33.348 7561.846 - 7612.258: 58.2110% ( 184) 00:30:33.348 7612.258 - 7662.671: 58.9765% ( 121) 00:30:33.348 7662.671 - 7713.083: 59.7229% ( 118) 00:30:33.348 7713.083 - 7763.495: 60.3618% ( 101) 00:30:33.348 7763.495 - 7813.908: 60.8806% ( 82) 00:30:33.348 7813.908 - 7864.320: 61.5258% ( 102) 00:30:33.348 7864.320 - 7914.732: 62.2596% ( 116) 00:30:33.348 7914.732 - 7965.145: 62.8479% ( 93) 00:30:33.348 7965.145 - 8015.557: 63.3730% ( 83) 00:30:33.348 8015.557 - 8065.969: 64.1700% ( 126) 00:30:33.348 8065.969 - 8116.382: 64.8659% ( 110) 00:30:33.348 8116.382 - 8166.794: 65.4542% ( 93) 00:30:33.348 8166.794 - 8217.206: 66.0172% ( 89) 00:30:33.348 8217.206 - 8267.618: 66.6751% ( 104) 00:30:33.348 8267.618 - 8318.031: 67.3583% ( 108) 00:30:33.348 8318.031 - 8368.443: 67.9087% ( 87) 00:30:33.348 8368.443 - 8418.855: 68.4337% ( 83) 00:30:33.348 8418.855 - 8469.268: 68.9904% ( 88) 00:30:33.348 8469.268 - 8519.680: 69.4522% ( 73) 00:30:33.348 8519.680 - 8570.092: 69.9519% ( 79) 00:30:33.348 8570.092 - 8620.505: 70.5276% ( 91) 00:30:33.348 8620.505 - 8670.917: 71.4638% ( 148) 00:30:33.348 8670.917 - 8721.329: 72.3558% ( 141) 00:30:33.348 8721.329 - 8771.742: 73.1845% ( 131) 00:30:33.348 8771.742 - 8822.154: 74.0321% ( 134) 00:30:33.348 8822.154 - 8872.566: 74.8861% ( 135) 00:30:33.348 8872.566 - 8922.978: 75.6199% ( 116) 00:30:33.348 8922.978 - 8973.391: 76.4360% ( 129) 00:30:33.348 8973.391 - 9023.803: 77.2204% ( 124) 00:30:33.348 9023.803 - 9074.215: 77.9922% ( 122) 00:30:33.348 9074.215 - 9124.628: 78.8525% ( 136) 00:30:33.348 9124.628 - 9175.040: 79.8773% ( 162) 00:30:33.348 9175.040 - 9225.452: 80.8704% ( 157) 00:30:33.348 9225.452 - 9275.865: 81.8383% ( 153) 00:30:33.348 9275.865 - 9326.277: 82.8062% ( 153) 00:30:33.348 9326.277 - 9376.689: 84.0018% ( 189) 00:30:33.348 9376.689 - 9427.102: 85.0329% ( 163) 00:30:33.348 9427.102 - 9477.514: 86.0134% ( 155) 00:30:33.348 9477.514 - 9527.926: 87.1141% ( 174) 00:30:33.348 9527.926 - 9578.338: 88.1895% ( 170) 00:30:33.348 9578.338 - 9628.751: 89.3219% ( 179) 00:30:33.348 9628.751 - 9679.163: 90.5428% ( 193) 00:30:33.348 9679.163 - 9729.575: 91.6498% ( 175) 00:30:33.348 9729.575 - 9779.988: 92.3710% ( 114) 00:30:33.348 9779.988 - 9830.400: 93.1301% ( 120) 00:30:33.348 9830.400 - 9880.812: 93.9018% ( 122) 00:30:33.348 9880.812 - 9931.225: 94.4712% ( 90) 00:30:33.348 9931.225 - 9981.637: 95.0848% ( 97) 00:30:33.348 9981.637 - 10032.049: 95.5845% ( 79) 00:30:33.348 10032.049 - 10082.462: 95.9577% ( 59) 00:30:33.348 10082.462 - 10132.874: 96.2677% ( 49) 00:30:33.348 10132.874 - 10183.286: 96.5271% ( 41) 00:30:33.348 10183.286 - 10233.698: 96.7548% ( 36) 00:30:33.348 10233.698 - 10284.111: 96.9636% ( 33) 00:30:33.348 10284.111 - 10334.523: 97.1280% ( 26) 00:30:33.348 10334.523 - 10384.935: 97.2609% ( 21) 00:30:33.348 10384.935 - 10435.348: 97.4001% ( 22) 
00:30:33.348 10435.348 - 10485.760: 97.4696% ( 11) 00:30:33.348 10485.760 - 10536.172: 97.5582% ( 14) 00:30:33.348 10536.172 - 10586.585: 97.6278% ( 11) 00:30:33.348 10586.585 - 10636.997: 97.7543% ( 20) 00:30:33.348 10636.997 - 10687.409: 97.8492% ( 15) 00:30:33.348 10687.409 - 10737.822: 97.9631% ( 18) 00:30:33.348 10737.822 - 10788.234: 98.0643% ( 16) 00:30:33.348 10788.234 - 10838.646: 98.1971% ( 21) 00:30:33.348 10838.646 - 10889.058: 98.2414% ( 7) 00:30:33.348 10889.058 - 10939.471: 98.2857% ( 7) 00:30:33.348 10939.471 - 10989.883: 98.3047% ( 3) 00:30:33.348 10989.883 - 11040.295: 98.3236% ( 3) 00:30:33.348 11040.295 - 11090.708: 98.3426% ( 3) 00:30:33.348 11090.708 - 11141.120: 98.3616% ( 3) 00:30:33.348 11141.120 - 11191.532: 98.3806% ( 3) 00:30:33.348 11947.717 - 11998.129: 98.3932% ( 2) 00:30:33.348 11998.129 - 12048.542: 98.4312% ( 6) 00:30:33.348 12048.542 - 12098.954: 98.4691% ( 6) 00:30:33.348 12098.954 - 12149.366: 98.5197% ( 8) 00:30:33.348 12149.366 - 12199.778: 98.5640% ( 7) 00:30:33.348 12199.778 - 12250.191: 98.6083% ( 7) 00:30:33.348 12250.191 - 12300.603: 98.6526% ( 7) 00:30:33.348 12300.603 - 12351.015: 98.6842% ( 5) 00:30:33.348 12351.015 - 12401.428: 98.7285% ( 7) 00:30:33.348 12401.428 - 12451.840: 98.7854% ( 9) 00:30:33.348 12451.840 - 12502.252: 98.8360% ( 8) 00:30:33.348 12502.252 - 12552.665: 98.8803% ( 7) 00:30:33.348 12552.665 - 12603.077: 98.9183% ( 6) 00:30:33.349 12603.077 - 12653.489: 98.9562% ( 6) 00:30:33.349 12653.489 - 12703.902: 98.9879% ( 5) 00:30:33.349 12703.902 - 12754.314: 99.0258% ( 6) 00:30:33.349 12754.314 - 12804.726: 99.0638% ( 6) 00:30:33.349 12804.726 - 12855.138: 99.1017% ( 6) 00:30:33.349 12855.138 - 12905.551: 99.1397% ( 6) 00:30:33.349 12905.551 - 13006.375: 99.1840% ( 7) 00:30:33.349 13006.375 - 13107.200: 99.1903% ( 1) 00:30:33.349 23996.258 - 24097.083: 99.1966% ( 1) 00:30:33.349 24097.083 - 24197.908: 99.2156% ( 3) 00:30:33.349 24197.908 - 24298.732: 99.2409% ( 4) 00:30:33.349 24298.732 - 24399.557: 99.2662% ( 4) 00:30:33.349 24399.557 - 24500.382: 99.2852% ( 3) 00:30:33.349 24500.382 - 24601.206: 99.3105% ( 4) 00:30:33.349 24601.206 - 24702.031: 99.3358% ( 4) 00:30:33.349 24702.031 - 24802.855: 99.3548% ( 3) 00:30:33.349 24802.855 - 24903.680: 99.3737% ( 3) 00:30:33.349 24903.680 - 25004.505: 99.3990% ( 4) 00:30:33.349 25004.505 - 25105.329: 99.4180% ( 3) 00:30:33.349 25105.329 - 25206.154: 99.4433% ( 4) 00:30:33.349 25206.154 - 25306.978: 99.4623% ( 3) 00:30:33.349 25306.978 - 25407.803: 99.4813% ( 3) 00:30:33.349 25407.803 - 25508.628: 99.5066% ( 4) 00:30:33.349 25508.628 - 25609.452: 99.5256% ( 3) 00:30:33.349 25609.452 - 25710.277: 99.5509% ( 4) 00:30:33.349 25710.277 - 25811.102: 99.5762% ( 4) 00:30:33.349 25811.102 - 26012.751: 99.5951% ( 3) 00:30:33.349 29037.489 - 29239.138: 99.6141% ( 3) 00:30:33.349 29239.138 - 29440.788: 99.6647% ( 8) 00:30:33.349 29440.788 - 29642.437: 99.7153% ( 8) 00:30:33.349 29642.437 - 29844.086: 99.7659% ( 8) 00:30:33.349 29844.086 - 30045.735: 99.8165% ( 8) 00:30:33.349 30045.735 - 30247.385: 99.8672% ( 8) 00:30:33.349 30247.385 - 30449.034: 99.9178% ( 8) 00:30:33.349 30449.034 - 30650.683: 99.9684% ( 8) 00:30:33.349 30650.683 - 30852.332: 100.0000% ( 5) 00:30:33.349 00:30:33.349 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:30:33.349 ============================================================================== 00:30:33.349 Range in us Cumulative IO count 00:30:33.349 6200.714 - 6225.920: 0.0063% ( 1) 00:30:33.349 6377.157 - 6402.363: 0.0316% ( 4) 00:30:33.349 6402.363 - 
6427.569: 0.0759% ( 7) 00:30:33.349 6427.569 - 6452.775: 0.1139% ( 6) 00:30:33.349 6452.775 - 6503.188: 0.2214% ( 17) 00:30:33.349 6503.188 - 6553.600: 0.3922% ( 27) 00:30:33.349 6553.600 - 6604.012: 0.6895% ( 47) 00:30:33.349 6604.012 - 6654.425: 1.1007% ( 65) 00:30:33.349 6654.425 - 6704.837: 1.6637% ( 89) 00:30:33.349 6704.837 - 6755.249: 2.4165% ( 119) 00:30:33.349 6755.249 - 6805.662: 3.6564% ( 196) 00:30:33.349 6805.662 - 6856.074: 5.3201% ( 263) 00:30:33.349 6856.074 - 6906.486: 7.7113% ( 378) 00:30:33.349 6906.486 - 6956.898: 11.5132% ( 601) 00:30:33.349 6956.898 - 7007.311: 15.8148% ( 680) 00:30:33.349 7007.311 - 7057.723: 20.5339% ( 746) 00:30:33.349 7057.723 - 7108.135: 25.6136% ( 803) 00:30:33.349 7108.135 - 7158.548: 31.1488% ( 875) 00:30:33.349 7158.548 - 7208.960: 36.9054% ( 910) 00:30:33.349 7208.960 - 7259.372: 41.5423% ( 733) 00:30:33.349 7259.372 - 7309.785: 45.3378% ( 600) 00:30:33.349 7309.785 - 7360.197: 48.7728% ( 543) 00:30:33.349 7360.197 - 7410.609: 51.5435% ( 438) 00:30:33.349 7410.609 - 7461.022: 53.6880% ( 339) 00:30:33.349 7461.022 - 7511.434: 55.3327% ( 260) 00:30:33.349 7511.434 - 7561.846: 56.5979% ( 200) 00:30:33.349 7561.846 - 7612.258: 57.7682% ( 185) 00:30:33.349 7612.258 - 7662.671: 58.7614% ( 157) 00:30:33.349 7662.671 - 7713.083: 59.7546% ( 157) 00:30:33.349 7713.083 - 7763.495: 60.8489% ( 173) 00:30:33.349 7763.495 - 7813.908: 61.6017% ( 119) 00:30:33.349 7813.908 - 7864.320: 62.0319% ( 68) 00:30:33.349 7864.320 - 7914.732: 62.6518% ( 98) 00:30:33.349 7914.732 - 7965.145: 63.3287% ( 107) 00:30:33.349 7965.145 - 8015.557: 63.8790% ( 87) 00:30:33.349 8015.557 - 8065.969: 64.4104% ( 84) 00:30:33.349 8065.969 - 8116.382: 64.9038% ( 78) 00:30:33.349 8116.382 - 8166.794: 65.4162% ( 81) 00:30:33.349 8166.794 - 8217.206: 65.9286% ( 81) 00:30:33.349 8217.206 - 8267.618: 66.3082% ( 60) 00:30:33.349 8267.618 - 8318.031: 66.8079% ( 79) 00:30:33.349 8318.031 - 8368.443: 67.2381% ( 68) 00:30:33.349 8368.443 - 8418.855: 67.9529% ( 113) 00:30:33.349 8418.855 - 8469.268: 68.7880% ( 132) 00:30:33.349 8469.268 - 8519.680: 69.3446% ( 88) 00:30:33.349 8519.680 - 8570.092: 70.0911% ( 118) 00:30:33.349 8570.092 - 8620.505: 70.7300% ( 101) 00:30:33.349 8620.505 - 8670.917: 71.5271% ( 126) 00:30:33.349 8670.917 - 8721.329: 72.4001% ( 138) 00:30:33.349 8721.329 - 8771.742: 73.3742% ( 154) 00:30:33.349 8771.742 - 8822.154: 74.1840% ( 128) 00:30:33.349 8822.154 - 8872.566: 75.0253% ( 133) 00:30:33.349 8872.566 - 8922.978: 75.7338% ( 112) 00:30:33.349 8922.978 - 8973.391: 76.5056% ( 122) 00:30:33.349 8973.391 - 9023.803: 77.4228% ( 145) 00:30:33.349 9023.803 - 9074.215: 78.2199% ( 126) 00:30:33.349 9074.215 - 9124.628: 79.0043% ( 124) 00:30:33.349 9124.628 - 9175.040: 79.8773% ( 138) 00:30:33.349 9175.040 - 9225.452: 80.9211% ( 165) 00:30:33.349 9225.452 - 9275.865: 81.9648% ( 165) 00:30:33.349 9275.865 - 9326.277: 83.0402% ( 170) 00:30:33.349 9326.277 - 9376.689: 84.0144% ( 154) 00:30:33.349 9376.689 - 9427.102: 85.0076% ( 157) 00:30:33.349 9427.102 - 9477.514: 85.8806% ( 138) 00:30:33.349 9477.514 - 9527.926: 86.9054% ( 162) 00:30:33.349 9527.926 - 9578.338: 87.8669% ( 152) 00:30:33.349 9578.338 - 9628.751: 88.8854% ( 161) 00:30:33.349 9628.751 - 9679.163: 90.1379% ( 198) 00:30:33.349 9679.163 - 9729.575: 91.3525% ( 192) 00:30:33.349 9729.575 - 9779.988: 92.3583% ( 159) 00:30:33.349 9779.988 - 9830.400: 93.2249% ( 137) 00:30:33.349 9830.400 - 9880.812: 94.0410% ( 129) 00:30:33.349 9880.812 - 9931.225: 94.5344% ( 78) 00:30:33.349 9931.225 - 9981.637: 95.0721% ( 85) 
00:30:33.349 9981.637 - 10032.049: 95.4960% ( 67) 00:30:33.349 10032.049 - 10082.462: 95.8186% ( 51) 00:30:33.349 10082.462 - 10132.874: 96.0843% ( 42) 00:30:33.349 10132.874 - 10183.286: 96.2804% ( 31) 00:30:33.349 10183.286 - 10233.698: 96.4006% ( 19) 00:30:33.349 10233.698 - 10284.111: 96.5840% ( 29) 00:30:33.349 10284.111 - 10334.523: 96.8117% ( 36) 00:30:33.349 10334.523 - 10384.935: 97.0268% ( 34) 00:30:33.349 10384.935 - 10435.348: 97.2672% ( 38) 00:30:33.349 10435.348 - 10485.760: 97.4823% ( 34) 00:30:33.349 10485.760 - 10536.172: 97.7543% ( 43) 00:30:33.349 10536.172 - 10586.585: 97.9124% ( 25) 00:30:33.349 10586.585 - 10636.997: 98.0010% ( 14) 00:30:33.349 10636.997 - 10687.409: 98.0769% ( 12) 00:30:33.349 10687.409 - 10737.822: 98.1402% ( 10) 00:30:33.349 10737.822 - 10788.234: 98.2034% ( 10) 00:30:33.349 10788.234 - 10838.646: 98.2730% ( 11) 00:30:33.349 10838.646 - 10889.058: 98.3047% ( 5) 00:30:33.349 10889.058 - 10939.471: 98.3426% ( 6) 00:30:33.349 10939.471 - 10989.883: 98.3616% ( 3) 00:30:33.349 10989.883 - 11040.295: 98.3742% ( 2) 00:30:33.349 11040.295 - 11090.708: 98.3806% ( 1) 00:30:33.349 11443.594 - 11494.006: 98.3869% ( 1) 00:30:33.349 11494.006 - 11544.418: 98.4122% ( 4) 00:30:33.349 11544.418 - 11594.831: 98.4312% ( 3) 00:30:33.349 11594.831 - 11645.243: 98.4502% ( 3) 00:30:33.349 11645.243 - 11695.655: 98.4628% ( 2) 00:30:33.349 11695.655 - 11746.068: 98.4881% ( 4) 00:30:33.349 11746.068 - 11796.480: 98.5071% ( 3) 00:30:33.349 11796.480 - 11846.892: 98.5261% ( 3) 00:30:33.349 11846.892 - 11897.305: 98.5450% ( 3) 00:30:33.349 11897.305 - 11947.717: 98.5640% ( 3) 00:30:33.349 11947.717 - 11998.129: 98.5893% ( 4) 00:30:33.349 11998.129 - 12048.542: 98.6083% ( 3) 00:30:33.349 12048.542 - 12098.954: 98.6273% ( 3) 00:30:33.349 12098.954 - 12149.366: 98.6463% ( 3) 00:30:33.349 12149.366 - 12199.778: 98.6652% ( 3) 00:30:33.349 12199.778 - 12250.191: 98.6905% ( 4) 00:30:33.349 12250.191 - 12300.603: 98.7095% ( 3) 00:30:33.349 12300.603 - 12351.015: 98.7601% ( 8) 00:30:33.349 12351.015 - 12401.428: 98.8234% ( 10) 00:30:33.349 12401.428 - 12451.840: 98.8866% ( 10) 00:30:33.349 12451.840 - 12502.252: 98.9436% ( 9) 00:30:33.349 12502.252 - 12552.665: 98.9815% ( 6) 00:30:33.349 12552.665 - 12603.077: 99.0132% ( 5) 00:30:33.349 12603.077 - 12653.489: 99.0321% ( 3) 00:30:33.349 12653.489 - 12703.902: 99.0511% ( 3) 00:30:33.349 12703.902 - 12754.314: 99.0701% ( 3) 00:30:33.349 12754.314 - 12804.726: 99.0827% ( 2) 00:30:33.349 12804.726 - 12855.138: 99.1017% ( 3) 00:30:33.349 12855.138 - 12905.551: 99.1207% ( 3) 00:30:33.349 12905.551 - 13006.375: 99.1587% ( 6) 00:30:33.349 13006.375 - 13107.200: 99.1903% ( 5) 00:30:33.349 22383.065 - 22483.889: 99.1966% ( 1) 00:30:33.349 22483.889 - 22584.714: 99.2219% ( 4) 00:30:33.349 22584.714 - 22685.538: 99.2472% ( 4) 00:30:33.349 22685.538 - 22786.363: 99.2662% ( 3) 00:30:33.349 22786.363 - 22887.188: 99.2915% ( 4) 00:30:33.349 22887.188 - 22988.012: 99.3168% ( 4) 00:30:33.349 22988.012 - 23088.837: 99.3484% ( 5) 00:30:33.349 23088.837 - 23189.662: 99.3737% ( 4) 00:30:33.349 23189.662 - 23290.486: 99.3927% ( 3) 00:30:33.349 23290.486 - 23391.311: 99.4180% ( 4) 00:30:33.349 23391.311 - 23492.135: 99.4433% ( 4) 00:30:33.349 23492.135 - 23592.960: 99.4686% ( 4) 00:30:33.349 23592.960 - 23693.785: 99.4939% ( 4) 00:30:33.349 23693.785 - 23794.609: 99.5256% ( 5) 00:30:33.349 23794.609 - 23895.434: 99.5509% ( 4) 00:30:33.349 23895.434 - 23996.258: 99.5762% ( 4) 00:30:33.349 23996.258 - 24097.083: 99.5951% ( 3) 00:30:33.349 27222.646 - 27424.295: 
99.6141% ( 3) 00:30:33.349 27424.295 - 27625.945: 99.6521% ( 6) 00:30:33.349 27625.945 - 27827.594: 99.7027% ( 8) 00:30:33.349 27827.594 - 28029.243: 99.7533% ( 8) 00:30:33.349 28029.243 - 28230.892: 99.8102% ( 9) 00:30:33.349 28230.892 - 28432.542: 99.8608% ( 8) 00:30:33.350 28432.542 - 28634.191: 99.9114% ( 8) 00:30:33.350 28634.191 - 28835.840: 99.9620% ( 8) 00:30:33.350 28835.840 - 29037.489: 100.0000% ( 6) 00:30:33.350 00:30:33.350 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:30:33.350 ============================================================================== 00:30:33.350 Range in us Cumulative IO count 00:30:33.350 6351.951 - 6377.157: 0.0063% ( 1) 00:30:33.350 6427.569 - 6452.775: 0.0127% ( 1) 00:30:33.350 6452.775 - 6503.188: 0.0696% ( 9) 00:30:33.350 6503.188 - 6553.600: 0.1898% ( 19) 00:30:33.350 6553.600 - 6604.012: 0.3606% ( 27) 00:30:33.350 6604.012 - 6654.425: 0.6895% ( 52) 00:30:33.350 6654.425 - 6704.837: 1.4297% ( 117) 00:30:33.350 6704.837 - 6755.249: 2.4987% ( 169) 00:30:33.350 6755.249 - 6805.662: 3.9347% ( 227) 00:30:33.350 6805.662 - 6856.074: 5.9843% ( 324) 00:30:33.350 6856.074 - 6906.486: 8.7424% ( 436) 00:30:33.350 6906.486 - 6956.898: 12.6392% ( 616) 00:30:33.350 6956.898 - 7007.311: 16.5043% ( 611) 00:30:33.350 7007.311 - 7057.723: 20.6035% ( 648) 00:30:33.350 7057.723 - 7108.135: 25.7212% ( 809) 00:30:33.350 7108.135 - 7158.548: 31.2373% ( 872) 00:30:33.350 7158.548 - 7208.960: 36.4372% ( 822) 00:30:33.350 7208.960 - 7259.372: 40.5870% ( 656) 00:30:33.350 7259.372 - 7309.785: 44.9013% ( 682) 00:30:33.350 7309.785 - 7360.197: 48.2667% ( 532) 00:30:33.350 7360.197 - 7410.609: 50.8287% ( 405) 00:30:33.350 7410.609 - 7461.022: 52.8720% ( 323) 00:30:33.350 7461.022 - 7511.434: 54.4281% ( 246) 00:30:33.350 7511.434 - 7561.846: 55.9906% ( 247) 00:30:33.350 7561.846 - 7612.258: 57.1926% ( 190) 00:30:33.350 7612.258 - 7662.671: 58.2300% ( 164) 00:30:33.350 7662.671 - 7713.083: 59.2485% ( 161) 00:30:33.350 7713.083 - 7763.495: 60.3745% ( 178) 00:30:33.350 7763.495 - 7813.908: 61.2475% ( 138) 00:30:33.350 7813.908 - 7864.320: 62.3672% ( 177) 00:30:33.350 7864.320 - 7914.732: 63.2401% ( 138) 00:30:33.350 7914.732 - 7965.145: 63.8031% ( 89) 00:30:33.350 7965.145 - 8015.557: 64.3408% ( 85) 00:30:33.350 8015.557 - 8065.969: 64.8975% ( 88) 00:30:33.350 8065.969 - 8116.382: 65.5491% ( 103) 00:30:33.350 8116.382 - 8166.794: 65.9350% ( 61) 00:30:33.350 8166.794 - 8217.206: 66.4157% ( 76) 00:30:33.350 8217.206 - 8267.618: 66.6878% ( 43) 00:30:33.350 8267.618 - 8318.031: 67.1812% ( 78) 00:30:33.350 8318.031 - 8368.443: 67.6556% ( 75) 00:30:33.350 8368.443 - 8418.855: 68.0352% ( 60) 00:30:33.350 8418.855 - 8469.268: 68.5033% ( 74) 00:30:33.350 8469.268 - 8519.680: 69.0473% ( 86) 00:30:33.350 8519.680 - 8570.092: 69.7811% ( 116) 00:30:33.350 8570.092 - 8620.505: 70.5655% ( 124) 00:30:33.350 8620.505 - 8670.917: 71.4512% ( 140) 00:30:33.350 8670.917 - 8721.329: 72.6784% ( 194) 00:30:33.350 8721.329 - 8771.742: 73.7411% ( 168) 00:30:33.350 8771.742 - 8822.154: 74.5382% ( 126) 00:30:33.350 8822.154 - 8872.566: 75.2341% ( 110) 00:30:33.350 8872.566 - 8922.978: 75.9489% ( 113) 00:30:33.350 8922.978 - 8973.391: 76.7206% ( 122) 00:30:33.350 8973.391 - 9023.803: 77.4861% ( 121) 00:30:33.350 9023.803 - 9074.215: 78.3021% ( 129) 00:30:33.350 9074.215 - 9124.628: 79.0865% ( 124) 00:30:33.350 9124.628 - 9175.040: 80.0038% ( 145) 00:30:33.350 9175.040 - 9225.452: 80.9780% ( 154) 00:30:33.350 9225.452 - 9275.865: 82.0534% ( 170) 00:30:33.350 9275.865 - 9326.277: 83.0213% ( 
153) 00:30:33.350 9326.277 - 9376.689: 83.9512% ( 147) 00:30:33.350 9376.689 - 9427.102: 84.9823% ( 163) 00:30:33.350 9427.102 - 9477.514: 85.9628% ( 155) 00:30:33.350 9477.514 - 9527.926: 86.9623% ( 158) 00:30:33.350 9527.926 - 9578.338: 88.0187% ( 167) 00:30:33.350 9578.338 - 9628.751: 89.0435% ( 162) 00:30:33.350 9628.751 - 9679.163: 90.0620% ( 161) 00:30:33.350 9679.163 - 9729.575: 91.1690% ( 175) 00:30:33.350 9729.575 - 9779.988: 92.0926% ( 146) 00:30:33.350 9779.988 - 9830.400: 92.9719% ( 139) 00:30:33.350 9830.400 - 9880.812: 93.5982% ( 99) 00:30:33.350 9880.812 - 9931.225: 94.1232% ( 83) 00:30:33.350 9931.225 - 9981.637: 94.5913% ( 74) 00:30:33.350 9981.637 - 10032.049: 94.9962% ( 64) 00:30:33.350 10032.049 - 10082.462: 95.3821% ( 61) 00:30:33.350 10082.462 - 10132.874: 95.6414% ( 41) 00:30:33.350 10132.874 - 10183.286: 95.9388% ( 47) 00:30:33.350 10183.286 - 10233.698: 96.2740% ( 53) 00:30:33.350 10233.698 - 10284.111: 96.5777% ( 48) 00:30:33.350 10284.111 - 10334.523: 96.8307% ( 40) 00:30:33.350 10334.523 - 10384.935: 97.0205% ( 30) 00:30:33.350 10384.935 - 10435.348: 97.2293% ( 33) 00:30:33.350 10435.348 - 10485.760: 97.3684% ( 22) 00:30:33.350 10485.760 - 10536.172: 97.5266% ( 25) 00:30:33.350 10536.172 - 10586.585: 97.7227% ( 31) 00:30:33.350 10586.585 - 10636.997: 97.8935% ( 27) 00:30:33.350 10636.997 - 10687.409: 98.0263% ( 21) 00:30:33.350 10687.409 - 10737.822: 98.1275% ( 16) 00:30:33.350 10737.822 - 10788.234: 98.2161% ( 14) 00:30:33.350 10788.234 - 10838.646: 98.3047% ( 14) 00:30:33.350 10838.646 - 10889.058: 98.3363% ( 5) 00:30:33.350 10889.058 - 10939.471: 98.3616% ( 4) 00:30:33.350 10939.471 - 10989.883: 98.3806% ( 3) 00:30:33.350 11090.708 - 11141.120: 98.4122% ( 5) 00:30:33.350 11141.120 - 11191.532: 98.4438% ( 5) 00:30:33.350 11191.532 - 11241.945: 98.4628% ( 3) 00:30:33.350 11241.945 - 11292.357: 98.4818% ( 3) 00:30:33.350 11292.357 - 11342.769: 98.5071% ( 4) 00:30:33.350 11342.769 - 11393.182: 98.5261% ( 3) 00:30:33.350 11393.182 - 11443.594: 98.5514% ( 4) 00:30:33.350 11443.594 - 11494.006: 98.5767% ( 4) 00:30:33.350 11494.006 - 11544.418: 98.5893% ( 2) 00:30:33.350 11544.418 - 11594.831: 98.6083% ( 3) 00:30:33.350 11594.831 - 11645.243: 98.6273% ( 3) 00:30:33.350 11645.243 - 11695.655: 98.6526% ( 4) 00:30:33.350 11695.655 - 11746.068: 98.6716% ( 3) 00:30:33.350 11746.068 - 11796.480: 98.6905% ( 3) 00:30:33.350 11796.480 - 11846.892: 98.7095% ( 3) 00:30:33.350 11846.892 - 11897.305: 98.7285% ( 3) 00:30:33.350 11897.305 - 11947.717: 98.7538% ( 4) 00:30:33.350 11947.717 - 11998.129: 98.7728% ( 3) 00:30:33.350 11998.129 - 12048.542: 98.7854% ( 2) 00:30:33.350 12653.489 - 12703.902: 98.7918% ( 1) 00:30:33.350 12703.902 - 12754.314: 98.7981% ( 1) 00:30:33.350 12754.314 - 12804.726: 98.8360% ( 6) 00:30:33.350 12804.726 - 12855.138: 98.8803% ( 7) 00:30:33.350 12855.138 - 12905.551: 98.9119% ( 5) 00:30:33.350 12905.551 - 13006.375: 98.9879% ( 12) 00:30:33.350 13006.375 - 13107.200: 99.0448% ( 9) 00:30:33.350 13107.200 - 13208.025: 99.0827% ( 6) 00:30:33.350 13208.025 - 13308.849: 99.1144% ( 5) 00:30:33.350 13308.849 - 13409.674: 99.1523% ( 6) 00:30:33.350 13409.674 - 13510.498: 99.1903% ( 6) 00:30:33.350 20669.046 - 20769.871: 99.2093% ( 3) 00:30:33.350 20769.871 - 20870.695: 99.2409% ( 5) 00:30:33.350 20870.695 - 20971.520: 99.2662% ( 4) 00:30:33.350 20971.520 - 21072.345: 99.2915% ( 4) 00:30:33.350 21072.345 - 21173.169: 99.3168% ( 4) 00:30:33.350 21173.169 - 21273.994: 99.3421% ( 4) 00:30:33.350 21273.994 - 21374.818: 99.3674% ( 4) 00:30:33.350 21374.818 - 21475.643: 
99.3927% ( 4) 00:30:33.350 21475.643 - 21576.468: 99.4180% ( 4) 00:30:33.350 21576.468 - 21677.292: 99.4433% ( 4) 00:30:33.350 21677.292 - 21778.117: 99.4623% ( 3) 00:30:33.350 21778.117 - 21878.942: 99.4876% ( 4) 00:30:33.350 21878.942 - 21979.766: 99.5129% ( 4) 00:30:33.350 21979.766 - 22080.591: 99.5382% ( 4) 00:30:33.350 22080.591 - 22181.415: 99.5635% ( 4) 00:30:33.350 22181.415 - 22282.240: 99.5888% ( 4) 00:30:33.350 22282.240 - 22383.065: 99.5951% ( 1) 00:30:33.350 25609.452 - 25710.277: 99.6015% ( 1) 00:30:33.350 25710.277 - 25811.102: 99.6268% ( 4) 00:30:33.350 25811.102 - 26012.751: 99.6774% ( 8) 00:30:33.350 26012.751 - 26214.400: 99.7217% ( 7) 00:30:33.350 26214.400 - 26416.049: 99.7723% ( 8) 00:30:33.350 26416.049 - 26617.698: 99.8292% ( 9) 00:30:33.350 26617.698 - 26819.348: 99.8798% ( 8) 00:30:33.350 26819.348 - 27020.997: 99.9367% ( 9) 00:30:33.350 27020.997 - 27222.646: 99.9873% ( 8) 00:30:33.350 27222.646 - 27424.295: 100.0000% ( 2) 00:30:33.350 00:30:33.350 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:30:33.350 ============================================================================== 00:30:33.350 Range in us Cumulative IO count 00:30:33.350 6301.538 - 6326.745: 0.0063% ( 1) 00:30:33.350 6351.951 - 6377.157: 0.0126% ( 1) 00:30:33.351 6402.363 - 6427.569: 0.0315% ( 3) 00:30:33.351 6427.569 - 6452.775: 0.0378% ( 1) 00:30:33.351 6452.775 - 6503.188: 0.0756% ( 6) 00:30:33.351 6503.188 - 6553.600: 0.1449% ( 11) 00:30:33.351 6553.600 - 6604.012: 0.3213% ( 28) 00:30:33.351 6604.012 - 6654.425: 0.6363% ( 50) 00:30:33.351 6654.425 - 6704.837: 1.0522% ( 66) 00:30:33.351 6704.837 - 6755.249: 2.2114% ( 184) 00:30:33.351 6755.249 - 6805.662: 3.8369% ( 258) 00:30:33.351 6805.662 - 6856.074: 5.4372% ( 254) 00:30:33.351 6856.074 - 6906.486: 8.1842% ( 436) 00:30:33.351 6906.486 - 6956.898: 12.5756% ( 697) 00:30:33.351 6956.898 - 7007.311: 16.9103% ( 688) 00:30:33.351 7007.311 - 7057.723: 21.6545% ( 753) 00:30:33.351 7057.723 - 7108.135: 26.3609% ( 747) 00:30:33.351 7108.135 - 7158.548: 32.0312% ( 900) 00:30:33.351 7158.548 - 7208.960: 36.9897% ( 787) 00:30:33.351 7208.960 - 7259.372: 40.6376% ( 579) 00:30:33.351 7259.372 - 7309.785: 44.7518% ( 653) 00:30:33.351 7309.785 - 7360.197: 48.1981% ( 547) 00:30:33.351 7360.197 - 7410.609: 50.5607% ( 375) 00:30:33.351 7410.609 - 7461.022: 52.2807% ( 273) 00:30:33.351 7461.022 - 7511.434: 54.4670% ( 347) 00:30:33.351 7511.434 - 7561.846: 55.6011% ( 180) 00:30:33.351 7561.846 - 7612.258: 57.0123% ( 224) 00:30:33.351 7612.258 - 7662.671: 57.9700% ( 152) 00:30:33.351 7662.671 - 7713.083: 59.2616% ( 205) 00:30:33.351 7713.083 - 7763.495: 59.9294% ( 106) 00:30:33.351 7763.495 - 7813.908: 61.1454% ( 193) 00:30:33.351 7813.908 - 7864.320: 61.7755% ( 100) 00:30:33.351 7864.320 - 7914.732: 62.4496% ( 107) 00:30:33.351 7914.732 - 7965.145: 63.4262% ( 155) 00:30:33.351 7965.145 - 8015.557: 64.0499% ( 99) 00:30:33.351 8015.557 - 8065.969: 64.6610% ( 97) 00:30:33.351 8065.969 - 8116.382: 65.2911% ( 100) 00:30:33.351 8116.382 - 8166.794: 65.8455% ( 88) 00:30:33.351 8166.794 - 8217.206: 66.4189% ( 91) 00:30:33.351 8217.206 - 8267.618: 66.7780% ( 57) 00:30:33.351 8267.618 - 8318.031: 67.1371% ( 57) 00:30:33.351 8318.031 - 8368.443: 67.4710% ( 53) 00:30:33.351 8368.443 - 8418.855: 68.2334% ( 121) 00:30:33.351 8418.855 - 8469.268: 68.6177% ( 61) 00:30:33.351 8469.268 - 8519.680: 69.0272% ( 65) 00:30:33.351 8519.680 - 8570.092: 69.4871% ( 73) 00:30:33.351 8570.092 - 8620.505: 70.1802% ( 110) 00:30:33.351 8620.505 - 8670.917: 71.0748% ( 142) 
00:30:33.351 8670.917 - 8721.329: 72.0262% ( 151) 00:30:33.351 8721.329 - 8771.742: 73.1225% ( 174) 00:30:33.351 8771.742 - 8822.154: 74.0801% ( 152) 00:30:33.351 8822.154 - 8872.566: 74.9370% ( 136) 00:30:33.351 8872.566 - 8922.978: 75.9262% ( 157) 00:30:33.351 8922.978 - 8973.391: 76.6948% ( 122) 00:30:33.351 8973.391 - 9023.803: 77.4383% ( 118) 00:30:33.351 9023.803 - 9074.215: 78.2195% ( 124) 00:30:33.351 9074.215 - 9124.628: 79.0827% ( 137) 00:30:33.351 9124.628 - 9175.040: 80.0340% ( 151) 00:30:33.351 9175.040 - 9225.452: 80.9791% ( 150) 00:30:33.351 9225.452 - 9275.865: 81.9493% ( 154) 00:30:33.351 9275.865 - 9326.277: 83.0267% ( 171) 00:30:33.351 9326.277 - 9376.689: 83.9340% ( 144) 00:30:33.351 9376.689 - 9427.102: 84.7971% ( 137) 00:30:33.351 9427.102 - 9477.514: 85.6918% ( 142) 00:30:33.351 9477.514 - 9527.926: 86.6053% ( 145) 00:30:33.351 9527.926 - 9578.338: 87.5315% ( 147) 00:30:33.351 9578.338 - 9628.751: 88.6467% ( 177) 00:30:33.351 9628.751 - 9679.163: 89.7933% ( 182) 00:30:33.351 9679.163 - 9729.575: 90.8959% ( 175) 00:30:33.351 9729.575 - 9779.988: 91.7213% ( 131) 00:30:33.351 9779.988 - 9830.400: 92.8301% ( 176) 00:30:33.351 9830.400 - 9880.812: 93.5799% ( 119) 00:30:33.351 9880.812 - 9931.225: 94.2540% ( 107) 00:30:33.351 9931.225 - 9981.637: 94.7392% ( 77) 00:30:33.351 9981.637 - 10032.049: 95.1865% ( 71) 00:30:33.351 10032.049 - 10082.462: 95.5897% ( 64) 00:30:33.351 10082.462 - 10132.874: 96.0307% ( 70) 00:30:33.351 10132.874 - 10183.286: 96.4025% ( 59) 00:30:33.351 10183.286 - 10233.698: 96.6293% ( 36) 00:30:33.351 10233.698 - 10284.111: 96.7805% ( 24) 00:30:33.351 10284.111 - 10334.523: 96.9254% ( 23) 00:30:33.351 10334.523 - 10384.935: 97.0703% ( 23) 00:30:33.351 10384.935 - 10435.348: 97.2089% ( 22) 00:30:33.351 10435.348 - 10485.760: 97.3538% ( 23) 00:30:33.351 10485.760 - 10536.172: 97.4798% ( 20) 00:30:33.351 10536.172 - 10586.585: 97.6184% ( 22) 00:30:33.351 10586.585 - 10636.997: 97.8453% ( 36) 00:30:33.351 10636.997 - 10687.409: 97.9650% ( 19) 00:30:33.351 10687.409 - 10737.822: 98.0658% ( 16) 00:30:33.351 10737.822 - 10788.234: 98.1603% ( 15) 00:30:33.351 10788.234 - 10838.646: 98.2422% ( 13) 00:30:33.351 10838.646 - 10889.058: 98.3241% ( 13) 00:30:33.351 10889.058 - 10939.471: 98.3619% ( 6) 00:30:33.351 10939.471 - 10989.883: 98.4123% ( 8) 00:30:33.351 10989.883 - 11040.295: 98.4501% ( 6) 00:30:33.351 11040.295 - 11090.708: 98.4753% ( 4) 00:30:33.351 11090.708 - 11141.120: 98.5194% ( 7) 00:30:33.351 11141.120 - 11191.532: 98.5572% ( 6) 00:30:33.351 11191.532 - 11241.945: 98.6013% ( 7) 00:30:33.351 11241.945 - 11292.357: 98.6391% ( 6) 00:30:33.351 11292.357 - 11342.769: 98.6643% ( 4) 00:30:33.351 11342.769 - 11393.182: 98.6832% ( 3) 00:30:33.351 11393.182 - 11443.594: 98.6958% ( 2) 00:30:33.351 11443.594 - 11494.006: 98.7147% ( 3) 00:30:33.351 11494.006 - 11544.418: 98.7336% ( 3) 00:30:33.351 11544.418 - 11594.831: 98.7525% ( 3) 00:30:33.351 11594.831 - 11645.243: 98.7714% ( 3) 00:30:33.351 11645.243 - 11695.655: 98.7903% ( 3) 00:30:33.351 12754.314 - 12804.726: 98.7966% ( 1) 00:30:33.351 12855.138 - 12905.551: 98.8155% ( 3) 00:30:33.351 12905.551 - 13006.375: 98.8848% ( 11) 00:30:33.351 13006.375 - 13107.200: 98.9667% ( 13) 00:30:33.351 13107.200 - 13208.025: 99.0108% ( 7) 00:30:33.351 13208.025 - 13308.849: 99.0486% ( 6) 00:30:33.351 13308.849 - 13409.674: 99.0801% ( 5) 00:30:33.351 13409.674 - 13510.498: 99.1242% ( 7) 00:30:33.351 13510.498 - 13611.323: 99.1683% ( 7) 00:30:33.351 13611.323 - 13712.148: 99.1935% ( 4) 00:30:33.351 14317.095 - 14417.920: 
99.2061% ( 2) 00:30:33.351 14417.920 - 14518.745: 99.2314% ( 4) 00:30:33.351 14518.745 - 14619.569: 99.2566% ( 4) 00:30:33.351 14619.569 - 14720.394: 99.2818% ( 4) 00:30:33.351 14720.394 - 14821.218: 99.3070% ( 4) 00:30:33.351 14821.218 - 14922.043: 99.3385% ( 5) 00:30:33.351 14922.043 - 15022.868: 99.3637% ( 4) 00:30:33.351 15022.868 - 15123.692: 99.3889% ( 4) 00:30:33.351 15123.692 - 15224.517: 99.4141% ( 4) 00:30:33.351 15224.517 - 15325.342: 99.4393% ( 4) 00:30:33.351 15325.342 - 15426.166: 99.4708% ( 5) 00:30:33.351 15426.166 - 15526.991: 99.4897% ( 3) 00:30:33.351 15526.991 - 15627.815: 99.5149% ( 4) 00:30:33.351 15627.815 - 15728.640: 99.5401% ( 4) 00:30:33.351 15728.640 - 15829.465: 99.5716% ( 5) 00:30:33.351 15829.465 - 15930.289: 99.5968% ( 4) 00:30:33.351 19963.274 - 20064.098: 99.6031% ( 1) 00:30:33.351 20064.098 - 20164.923: 99.6220% ( 3) 00:30:33.351 20164.923 - 20265.748: 99.6472% ( 4) 00:30:33.351 20265.748 - 20366.572: 99.6724% ( 4) 00:30:33.351 20366.572 - 20467.397: 99.6976% ( 4) 00:30:33.351 20467.397 - 20568.222: 99.7228% ( 4) 00:30:33.351 20568.222 - 20669.046: 99.7480% ( 4) 00:30:33.351 20669.046 - 20769.871: 99.7732% ( 4) 00:30:33.351 20769.871 - 20870.695: 99.7984% ( 4) 00:30:33.351 20870.695 - 20971.520: 99.8236% ( 4) 00:30:33.351 20971.520 - 21072.345: 99.8488% ( 4) 00:30:33.351 21072.345 - 21173.169: 99.8740% ( 4) 00:30:33.351 21173.169 - 21273.994: 99.8992% ( 4) 00:30:33.351 21273.994 - 21374.818: 99.9244% ( 4) 00:30:33.351 21374.818 - 21475.643: 99.9496% ( 4) 00:30:33.351 21475.643 - 21576.468: 99.9748% ( 4) 00:30:33.351 21576.468 - 21677.292: 100.0000% ( 4) 00:30:33.351 00:30:33.609 ************************************ 00:30:33.609 END TEST nvme_perf 00:30:33.609 ************************************ 00:30:33.609 15:55:54 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:30:33.609 00:30:33.609 real 0m2.521s 00:30:33.609 user 0m2.199s 00:30:33.609 sys 0m0.211s 00:30:33.609 15:55:54 nvme.nvme_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:33.609 15:55:54 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:30:33.609 15:55:54 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:30:33.609 15:55:54 nvme -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:30:33.609 15:55:54 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:33.609 15:55:54 nvme -- common/autotest_common.sh@10 -- # set +x 00:30:33.609 ************************************ 00:30:33.609 START TEST nvme_hello_world 00:30:33.609 ************************************ 00:30:33.609 15:55:54 nvme.nvme_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:30:33.609 Initializing NVMe Controllers 00:30:33.609 Attached to 0000:00:10.0 00:30:33.609 Namespace ID: 1 size: 6GB 00:30:33.609 Attached to 0000:00:11.0 00:30:33.609 Namespace ID: 1 size: 5GB 00:30:33.609 Attached to 0000:00:13.0 00:30:33.609 Namespace ID: 1 size: 1GB 00:30:33.609 Attached to 0000:00:12.0 00:30:33.609 Namespace ID: 1 size: 4GB 00:30:33.609 Namespace ID: 2 size: 4GB 00:30:33.609 Namespace ID: 3 size: 4GB 00:30:33.609 Initialization complete. 00:30:33.609 INFO: using host memory buffer for IO 00:30:33.609 Hello world! 00:30:33.610 INFO: using host memory buffer for IO 00:30:33.610 Hello world! 00:30:33.610 INFO: using host memory buffer for IO 00:30:33.610 Hello world! 00:30:33.610 INFO: using host memory buffer for IO 00:30:33.610 Hello world! 
00:30:33.610 INFO: using host memory buffer for IO 00:30:33.610 Hello world! 00:30:33.610 INFO: using host memory buffer for IO 00:30:33.610 Hello world! 00:30:33.867 ************************************ 00:30:33.867 END TEST nvme_hello_world 00:30:33.867 ************************************ 00:30:33.867 00:30:33.867 real 0m0.218s 00:30:33.867 user 0m0.089s 00:30:33.867 sys 0m0.087s 00:30:33.867 15:55:54 nvme.nvme_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:33.867 15:55:54 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:30:33.867 15:55:55 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:30:33.867 15:55:55 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:30:33.867 15:55:55 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:33.867 15:55:55 nvme -- common/autotest_common.sh@10 -- # set +x 00:30:33.867 ************************************ 00:30:33.867 START TEST nvme_sgl 00:30:33.867 ************************************ 00:30:33.867 15:55:55 nvme.nvme_sgl -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:30:33.867 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:30:33.867 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:30:33.867 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:30:34.125 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:30:34.125 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:30:34.125 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:30:34.125 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:30:34.125 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:30:34.125 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:30:34.125 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:30:34.125 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:30:34.125 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:30:34.125 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:30:34.125 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:30:34.125 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:30:34.125 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:30:34.125 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:30:34.125 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:30:34.125 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:30:34.125 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:30:34.125 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:30:34.125 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:30:34.125 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:30:34.125 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:30:34.125 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:30:34.125 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:30:34.125 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:30:34.125 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:30:34.125 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:30:34.125 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:30:34.125 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:30:34.125 0000:00:12.0: build_io_request_7 Invalid IO length parameter 
00:30:34.125 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:30:34.125 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:30:34.125 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:30:34.125 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:30:34.125 NVMe Readv/Writev Request test 00:30:34.125 Attached to 0000:00:10.0 00:30:34.125 Attached to 0000:00:11.0 00:30:34.125 Attached to 0000:00:13.0 00:30:34.125 Attached to 0000:00:12.0 00:30:34.125 0000:00:10.0: build_io_request_2 test passed 00:30:34.125 0000:00:10.0: build_io_request_4 test passed 00:30:34.125 0000:00:10.0: build_io_request_5 test passed 00:30:34.125 0000:00:10.0: build_io_request_6 test passed 00:30:34.125 0000:00:10.0: build_io_request_7 test passed 00:30:34.125 0000:00:10.0: build_io_request_10 test passed 00:30:34.125 0000:00:11.0: build_io_request_2 test passed 00:30:34.125 0000:00:11.0: build_io_request_4 test passed 00:30:34.125 0000:00:11.0: build_io_request_5 test passed 00:30:34.125 0000:00:11.0: build_io_request_6 test passed 00:30:34.125 0000:00:11.0: build_io_request_7 test passed 00:30:34.125 0000:00:11.0: build_io_request_10 test passed 00:30:34.125 Cleaning up... 00:30:34.125 ************************************ 00:30:34.125 END TEST nvme_sgl 00:30:34.125 ************************************ 00:30:34.125 00:30:34.125 real 0m0.312s 00:30:34.125 user 0m0.168s 00:30:34.125 sys 0m0.099s 00:30:34.125 15:55:55 nvme.nvme_sgl -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:34.125 15:55:55 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:30:34.125 15:55:55 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:30:34.125 15:55:55 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:30:34.125 15:55:55 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:34.125 15:55:55 nvme -- common/autotest_common.sh@10 -- # set +x 00:30:34.125 ************************************ 00:30:34.125 START TEST nvme_e2edp 00:30:34.125 ************************************ 00:30:34.125 15:55:55 nvme.nvme_e2edp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:30:34.383 NVMe Write/Read with End-to-End data protection test 00:30:34.383 Attached to 0000:00:10.0 00:30:34.383 Attached to 0000:00:11.0 00:30:34.383 Attached to 0000:00:13.0 00:30:34.383 Attached to 0000:00:12.0 00:30:34.383 Cleaning up... 
00:30:34.383 ************************************
00:30:34.383 END TEST nvme_e2edp
00:30:34.383 ************************************
00:30:34.383
00:30:34.383 real 0m0.226s
00:30:34.383 user 0m0.073s
00:30:34.383 sys 0m0.102s
00:30:34.383 15:55:55 nvme.nvme_e2edp -- common/autotest_common.sh@1128 -- # xtrace_disable
00:30:34.383 15:55:55 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:30:34.383 15:55:55 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:30:34.383 15:55:55 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:30:34.383 15:55:55 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable
00:30:34.383 15:55:55 nvme -- common/autotest_common.sh@10 -- # set +x
00:30:34.383 ************************************
00:30:34.383 START TEST nvme_reserve
00:30:34.383 ************************************
00:30:34.383 15:55:55 nvme.nvme_reserve -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:30:34.641 =====================================================
00:30:34.641 NVMe Controller at PCI bus 0, device 16, function 0
00:30:34.641 =====================================================
00:30:34.641 Reservations: Not Supported
00:30:34.641 =====================================================
00:30:34.641 NVMe Controller at PCI bus 0, device 17, function 0
00:30:34.641 =====================================================
00:30:34.641 Reservations: Not Supported
00:30:34.641 =====================================================
00:30:34.641 NVMe Controller at PCI bus 0, device 19, function 0
00:30:34.641 =====================================================
00:30:34.641 Reservations: Not Supported
00:30:34.641 =====================================================
00:30:34.641 NVMe Controller at PCI bus 0, device 18, function 0
00:30:34.641 =====================================================
00:30:34.641 Reservations: Not Supported
00:30:34.641 Reservation test passed
00:30:34.641 ************************************
00:30:34.641 END TEST nvme_reserve
00:30:34.641 ************************************
00:30:34.641
00:30:34.641 real 0m0.217s
00:30:34.641 user 0m0.076s
00:30:34.641 sys 0m0.095s
00:30:34.641 15:55:55 nvme.nvme_reserve -- common/autotest_common.sh@1128 -- # xtrace_disable
00:30:34.641 15:55:55 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
00:30:34.641 15:55:55 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:30:34.641 15:55:55 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:30:34.641 15:55:55 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable
00:30:34.641 15:55:55 nvme -- common/autotest_common.sh@10 -- # set +x
00:30:34.641 ************************************
00:30:34.641 START TEST nvme_err_injection
00:30:34.641 ************************************
00:30:34.641 15:55:55 nvme.nvme_err_injection -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:30:34.899 NVMe Error Injection test
00:30:34.899 Attached to 0000:00:10.0
00:30:34.899 Attached to 0000:00:11.0
00:30:34.899 Attached to 0000:00:13.0
00:30:34.899 Attached to 0000:00:12.0
00:30:34.899 0000:00:10.0: get features failed as expected
00:30:34.899 0000:00:11.0: get features failed as expected
00:30:34.899 0000:00:13.0: get features failed as expected
00:30:34.899 0000:00:12.0: get features failed as expected
00:30:34.899 0000:00:10.0: get features successfully as expected
00:30:34.899 0000:00:11.0: get features successfully as expected
00:30:34.899 0000:00:13.0: get features successfully as expected
00:30:34.899 0000:00:12.0: get features successfully as expected
00:30:34.899 0000:00:10.0: read failed as expected
00:30:34.899 0000:00:11.0: read failed as expected
00:30:34.899 0000:00:13.0: read failed as expected
00:30:34.899 0000:00:12.0: read failed as expected
00:30:34.899 0000:00:10.0: read successfully as expected
00:30:34.899 0000:00:11.0: read successfully as expected
00:30:34.899 0000:00:13.0: read successfully as expected
00:30:34.899 0000:00:12.0: read successfully as expected
00:30:34.899 Cleaning up...
00:30:34.899 ************************************
00:30:34.899 END TEST nvme_err_injection
00:30:34.899 ************************************
00:30:34.899
00:30:34.899 real 0m0.267s
00:30:34.899 user 0m0.102s
00:30:34.899 sys 0m0.119s
00:30:34.899 15:55:56 nvme.nvme_err_injection -- common/autotest_common.sh@1128 -- # xtrace_disable
00:30:34.899 15:55:56 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
00:30:34.899 15:55:56 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:30:34.899 15:55:56 nvme -- common/autotest_common.sh@1103 -- # '[' 9 -le 1 ']'
00:30:34.899 15:55:56 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable
00:30:34.899 15:55:56 nvme -- common/autotest_common.sh@10 -- # set +x
00:30:34.899 ************************************
00:30:34.899 START TEST nvme_overhead
00:30:34.899 ************************************
00:30:34.899 15:55:56 nvme.nvme_overhead -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:30:36.274 Initializing NVMe Controllers
00:30:36.274 Attached to 0000:00:10.0
00:30:36.274 Attached to 0000:00:11.0
00:30:36.274 Attached to 0000:00:13.0
00:30:36.274 Attached to 0000:00:12.0
00:30:36.274 Initialization complete. Launching workers.
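
The overhead run traced above follows the same run_test pattern as every test in this log: the harness names the test, executes the binary, and frames it with START/END banners and real/user/sys timings. A minimal bash sketch of that pattern, assuming nothing about SPDK's actual run_test implementation in autotest_common.sh beyond what the banners show:

    run_test_sketch() {
        # Name the test, run it under `time`, and print the framing banners.
        local name=$1; shift
        echo "START TEST $name"
        time "$@"              # emits the real/user/sys lines seen throughout this log
        echo "END TEST $name"
    }
    run_test_sketch nvme_overhead \
        /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0

The flags are taken verbatim from the trace above; their meanings (-o I/O size in bytes, -t run time in seconds, -H print the submit/complete histograms that follow, -i shared memory ID) are assumptions based on common SPDK tool conventions, not confirmed by this log.
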
00:30:36.274 submit (in ns) avg, min, max = 11742.3, 9857.7, 191707.7 00:30:36.274 complete (in ns) avg, min, max = 7853.6, 7228.5, 118350.8 00:30:36.274 00:30:36.274 Submit histogram 00:30:36.274 ================ 00:30:36.274 Range in us Cumulative Count 00:30:36.274 9.846 - 9.895: 0.0132% ( 2) 00:30:36.274 10.043 - 10.092: 0.0198% ( 1) 00:30:36.274 10.191 - 10.240: 0.0331% ( 2) 00:30:36.274 10.338 - 10.388: 0.0397% ( 1) 00:30:36.274 10.388 - 10.437: 0.0463% ( 1) 00:30:36.274 10.437 - 10.486: 0.0529% ( 1) 00:30:36.274 10.634 - 10.683: 0.0661% ( 2) 00:30:36.274 10.683 - 10.732: 0.1058% ( 6) 00:30:36.274 10.732 - 10.782: 0.1455% ( 6) 00:30:36.274 10.782 - 10.831: 0.2380% ( 14) 00:30:36.274 10.831 - 10.880: 0.4364% ( 30) 00:30:36.274 10.880 - 10.929: 1.0645% ( 95) 00:30:36.274 10.929 - 10.978: 2.6512% ( 240) 00:30:36.274 10.978 - 11.028: 6.2942% ( 551) 00:30:36.274 11.028 - 11.077: 12.4562% ( 932) 00:30:36.274 11.077 - 11.126: 21.2694% ( 1333) 00:30:36.274 11.126 - 11.175: 31.1074% ( 1488) 00:30:36.274 11.175 - 11.225: 40.8463% ( 1473) 00:30:36.274 11.225 - 11.274: 50.4992% ( 1460) 00:30:36.274 11.274 - 11.323: 57.5736% ( 1070) 00:30:36.274 11.323 - 11.372: 62.2744% ( 711) 00:30:36.274 11.372 - 11.422: 65.4347% ( 478) 00:30:36.274 11.422 - 11.471: 67.7752% ( 354) 00:30:36.274 11.471 - 11.520: 69.4413% ( 252) 00:30:36.274 11.520 - 11.569: 71.0479% ( 243) 00:30:36.274 11.569 - 11.618: 72.3570% ( 198) 00:30:36.274 11.618 - 11.668: 73.7058% ( 204) 00:30:36.274 11.668 - 11.717: 75.0744% ( 207) 00:30:36.274 11.717 - 11.766: 76.7207% ( 249) 00:30:36.274 11.766 - 11.815: 78.1752% ( 220) 00:30:36.274 11.815 - 11.865: 79.6760% ( 227) 00:30:36.274 11.865 - 11.914: 81.4347% ( 266) 00:30:36.274 11.914 - 11.963: 83.0281% ( 241) 00:30:36.274 11.963 - 12.012: 84.4893% ( 221) 00:30:36.274 12.012 - 12.062: 85.9372% ( 219) 00:30:36.274 12.062 - 12.111: 87.5967% ( 251) 00:30:36.274 12.111 - 12.160: 89.0579% ( 221) 00:30:36.274 12.160 - 12.209: 90.3273% ( 192) 00:30:36.274 12.209 - 12.258: 91.5372% ( 183) 00:30:36.274 12.258 - 12.308: 92.5884% ( 159) 00:30:36.274 12.308 - 12.357: 93.4942% ( 137) 00:30:36.274 12.357 - 12.406: 94.0628% ( 86) 00:30:36.274 12.406 - 12.455: 94.5719% ( 77) 00:30:36.274 12.455 - 12.505: 94.8562% ( 43) 00:30:36.274 12.505 - 12.554: 95.1273% ( 41) 00:30:36.274 12.554 - 12.603: 95.3256% ( 30) 00:30:36.274 12.603 - 12.702: 95.5372% ( 32) 00:30:36.274 12.702 - 12.800: 95.6562% ( 18) 00:30:36.274 12.800 - 12.898: 95.7884% ( 20) 00:30:36.274 12.898 - 12.997: 95.8744% ( 13) 00:30:36.274 12.997 - 13.095: 96.0000% ( 19) 00:30:36.274 13.095 - 13.194: 96.0926% ( 14) 00:30:36.274 13.194 - 13.292: 96.1785% ( 13) 00:30:36.274 13.292 - 13.391: 96.3107% ( 20) 00:30:36.274 13.391 - 13.489: 96.3835% ( 11) 00:30:36.274 13.489 - 13.588: 96.4562% ( 11) 00:30:36.274 13.588 - 13.686: 96.5554% ( 15) 00:30:36.274 13.686 - 13.785: 96.6479% ( 14) 00:30:36.274 13.785 - 13.883: 96.7405% ( 14) 00:30:36.274 13.883 - 13.982: 96.7934% ( 8) 00:30:36.274 13.982 - 14.080: 96.8992% ( 16) 00:30:36.274 14.080 - 14.178: 96.9455% ( 7) 00:30:36.274 14.178 - 14.277: 97.0380% ( 14) 00:30:36.274 14.277 - 14.375: 97.0777% ( 6) 00:30:36.274 14.375 - 14.474: 97.1372% ( 9) 00:30:36.274 14.474 - 14.572: 97.1835% ( 7) 00:30:36.274 14.572 - 14.671: 97.2033% ( 3) 00:30:36.274 14.671 - 14.769: 97.2496% ( 7) 00:30:36.274 14.769 - 14.868: 97.3025% ( 8) 00:30:36.274 14.868 - 14.966: 97.3620% ( 9) 00:30:36.274 14.966 - 15.065: 97.4017% ( 6) 00:30:36.274 15.065 - 15.163: 97.4612% ( 9) 00:30:36.274 15.163 - 15.262: 97.4876% ( 4) 00:30:36.274 
15.262 - 15.360: 97.5074% ( 3) 00:30:36.274 15.360 - 15.458: 97.5537% ( 7) 00:30:36.274 15.458 - 15.557: 97.5603% ( 1) 00:30:36.274 15.557 - 15.655: 97.5934% ( 5) 00:30:36.274 15.655 - 15.754: 97.6066% ( 2) 00:30:36.274 15.754 - 15.852: 97.6132% ( 1) 00:30:36.274 15.852 - 15.951: 97.6529% ( 6) 00:30:36.274 15.951 - 16.049: 97.6595% ( 1) 00:30:36.274 16.049 - 16.148: 97.6661% ( 1) 00:30:36.274 16.148 - 16.246: 97.6793% ( 2) 00:30:36.274 16.246 - 16.345: 97.6860% ( 1) 00:30:36.274 16.345 - 16.443: 97.6926% ( 1) 00:30:36.274 16.443 - 16.542: 97.7124% ( 3) 00:30:36.274 16.542 - 16.640: 97.7256% ( 2) 00:30:36.274 16.640 - 16.738: 97.7521% ( 4) 00:30:36.274 16.738 - 16.837: 97.7719% ( 3) 00:30:36.274 16.837 - 16.935: 97.8050% ( 5) 00:30:36.274 16.935 - 17.034: 97.8579% ( 8) 00:30:36.274 17.034 - 17.132: 97.9438% ( 13) 00:30:36.274 17.132 - 17.231: 98.0033% ( 9) 00:30:36.274 17.231 - 17.329: 98.0562% ( 8) 00:30:36.274 17.329 - 17.428: 98.0959% ( 6) 00:30:36.274 17.428 - 17.526: 98.1355% ( 6) 00:30:36.274 17.526 - 17.625: 98.1950% ( 9) 00:30:36.274 17.625 - 17.723: 98.2612% ( 10) 00:30:36.274 17.723 - 17.822: 98.3074% ( 7) 00:30:36.274 17.822 - 17.920: 98.3934% ( 13) 00:30:36.274 17.920 - 18.018: 98.4529% ( 9) 00:30:36.274 18.018 - 18.117: 98.5190% ( 10) 00:30:36.274 18.117 - 18.215: 98.5785% ( 9) 00:30:36.274 18.215 - 18.314: 98.6116% ( 5) 00:30:36.274 18.314 - 18.412: 98.6711% ( 9) 00:30:36.274 18.412 - 18.511: 98.7107% ( 6) 00:30:36.274 18.511 - 18.609: 98.7570% ( 7) 00:30:36.274 18.609 - 18.708: 98.7967% ( 6) 00:30:36.274 18.708 - 18.806: 98.8298% ( 5) 00:30:36.274 18.806 - 18.905: 98.8694% ( 6) 00:30:36.274 18.905 - 19.003: 98.9157% ( 7) 00:30:36.274 19.003 - 19.102: 98.9554% ( 6) 00:30:36.274 19.102 - 19.200: 99.0017% ( 7) 00:30:36.274 19.200 - 19.298: 99.0347% ( 5) 00:30:36.274 19.298 - 19.397: 99.0413% ( 1) 00:30:36.274 19.397 - 19.495: 99.0545% ( 2) 00:30:36.274 19.495 - 19.594: 99.0810% ( 4) 00:30:36.274 19.594 - 19.692: 99.0876% ( 1) 00:30:36.274 19.692 - 19.791: 99.1074% ( 3) 00:30:36.274 19.791 - 19.889: 99.1207% ( 2) 00:30:36.274 19.889 - 19.988: 99.1273% ( 1) 00:30:36.274 19.988 - 20.086: 99.1339% ( 1) 00:30:36.274 20.086 - 20.185: 99.1471% ( 2) 00:30:36.274 20.283 - 20.382: 99.1537% ( 1) 00:30:36.274 20.480 - 20.578: 99.1669% ( 2) 00:30:36.274 20.677 - 20.775: 99.1736% ( 1) 00:30:36.274 20.874 - 20.972: 99.1802% ( 1) 00:30:36.274 20.972 - 21.071: 99.1934% ( 2) 00:30:36.274 21.071 - 21.169: 99.2000% ( 1) 00:30:36.274 21.169 - 21.268: 99.2132% ( 2) 00:30:36.274 21.366 - 21.465: 99.2198% ( 1) 00:30:36.274 21.465 - 21.563: 99.2264% ( 1) 00:30:36.274 21.563 - 21.662: 99.2331% ( 1) 00:30:36.274 21.858 - 21.957: 99.2463% ( 2) 00:30:36.274 21.957 - 22.055: 99.2595% ( 2) 00:30:36.274 22.055 - 22.154: 99.2727% ( 2) 00:30:36.274 22.154 - 22.252: 99.2793% ( 1) 00:30:36.275 22.646 - 22.745: 99.2860% ( 1) 00:30:36.275 23.828 - 23.926: 99.2926% ( 1) 00:30:36.275 24.517 - 24.615: 99.2992% ( 1) 00:30:36.275 24.714 - 24.812: 99.3058% ( 1) 00:30:36.275 25.108 - 25.206: 99.3124% ( 1) 00:30:36.275 25.403 - 25.600: 99.3256% ( 2) 00:30:36.275 26.782 - 26.978: 99.3322% ( 1) 00:30:36.275 27.963 - 28.160: 99.3388% ( 1) 00:30:36.275 28.554 - 28.751: 99.3455% ( 1) 00:30:36.275 28.948 - 29.145: 99.3521% ( 1) 00:30:36.275 30.720 - 30.917: 99.3587% ( 1) 00:30:36.275 31.508 - 31.705: 99.3653% ( 1) 00:30:36.275 31.705 - 31.902: 99.3917% ( 4) 00:30:36.275 31.902 - 32.098: 99.5174% ( 19) 00:30:36.275 32.098 - 32.295: 99.6165% ( 15) 00:30:36.275 32.295 - 32.492: 99.7355% ( 18) 00:30:36.275 32.492 - 32.689: 99.8083% ( 
11) 00:30:36.275 32.689 - 32.886: 99.8545% ( 7) 00:30:36.275 32.886 - 33.083: 99.8744% ( 3) 00:30:36.275 33.083 - 33.280: 99.8876% ( 2) 00:30:36.275 33.280 - 33.477: 99.8942% ( 1) 00:30:36.275 33.871 - 34.068: 99.9008% ( 1) 00:30:36.275 34.068 - 34.265: 99.9140% ( 2) 00:30:36.275 35.840 - 36.037: 99.9207% ( 1) 00:30:36.275 36.234 - 36.431: 99.9273% ( 1) 00:30:36.275 39.778 - 39.975: 99.9339% ( 1) 00:30:36.275 40.566 - 40.763: 99.9405% ( 1) 00:30:36.275 41.157 - 41.354: 99.9471% ( 1) 00:30:36.275 43.520 - 43.717: 99.9537% ( 1) 00:30:36.275 48.443 - 48.640: 99.9603% ( 1) 00:30:36.275 50.215 - 50.412: 99.9669% ( 1) 00:30:36.275 55.138 - 55.532: 99.9736% ( 1) 00:30:36.275 63.803 - 64.197: 99.9802% ( 1) 00:30:36.275 77.588 - 77.982: 99.9868% ( 1) 00:30:36.275 91.372 - 91.766: 99.9934% ( 1) 00:30:36.275 191.409 - 192.197: 100.0000% ( 1) 00:30:36.275 00:30:36.275 Complete histogram 00:30:36.275 ================== 00:30:36.275 Range in us Cumulative Count 00:30:36.275 7.188 - 7.237: 0.0066% ( 1) 00:30:36.275 7.237 - 7.286: 0.1851% ( 27) 00:30:36.275 7.286 - 7.335: 1.4083% ( 185) 00:30:36.275 7.335 - 7.385: 5.4281% ( 608) 00:30:36.275 7.385 - 7.434: 15.4050% ( 1509) 00:30:36.275 7.434 - 7.483: 30.4066% ( 2269) 00:30:36.275 7.483 - 7.532: 46.7372% ( 2470) 00:30:36.275 7.532 - 7.582: 60.1521% ( 2029) 00:30:36.275 7.582 - 7.631: 69.5537% ( 1422) 00:30:36.275 7.631 - 7.680: 76.3835% ( 1033) 00:30:36.275 7.680 - 7.729: 80.8860% ( 681) 00:30:36.275 7.729 - 7.778: 84.0198% ( 474) 00:30:36.275 7.778 - 7.828: 86.2545% ( 338) 00:30:36.275 7.828 - 7.877: 87.6033% ( 204) 00:30:36.275 7.877 - 7.926: 88.6281% ( 155) 00:30:36.275 7.926 - 7.975: 89.3223% ( 105) 00:30:36.275 7.975 - 8.025: 89.8909% ( 86) 00:30:36.275 8.025 - 8.074: 90.5124% ( 94) 00:30:36.275 8.074 - 8.123: 91.2132% ( 106) 00:30:36.275 8.123 - 8.172: 92.0860% ( 132) 00:30:36.275 8.172 - 8.222: 92.7736% ( 104) 00:30:36.275 8.222 - 8.271: 93.4612% ( 104) 00:30:36.275 8.271 - 8.320: 94.1025% ( 97) 00:30:36.275 8.320 - 8.369: 94.7306% ( 95) 00:30:36.275 8.369 - 8.418: 95.3256% ( 90) 00:30:36.275 8.418 - 8.468: 95.8612% ( 81) 00:30:36.275 8.468 - 8.517: 96.2843% ( 64) 00:30:36.275 8.517 - 8.566: 96.5686% ( 43) 00:30:36.275 8.566 - 8.615: 96.7736% ( 31) 00:30:36.275 8.615 - 8.665: 96.9455% ( 26) 00:30:36.275 8.665 - 8.714: 97.0446% ( 15) 00:30:36.275 8.714 - 8.763: 97.1240% ( 12) 00:30:36.275 8.763 - 8.812: 97.2562% ( 20) 00:30:36.275 8.812 - 8.862: 97.2826% ( 4) 00:30:36.275 8.862 - 8.911: 97.3091% ( 4) 00:30:36.275 8.911 - 8.960: 97.3289% ( 3) 00:30:36.275 8.960 - 9.009: 97.3554% ( 4) 00:30:36.275 9.009 - 9.058: 97.3950% ( 6) 00:30:36.275 9.108 - 9.157: 97.4083% ( 2) 00:30:36.275 9.157 - 9.206: 97.4215% ( 2) 00:30:36.275 9.206 - 9.255: 97.4281% ( 1) 00:30:36.275 9.255 - 9.305: 97.4347% ( 1) 00:30:36.275 9.305 - 9.354: 97.4545% ( 3) 00:30:36.275 9.354 - 9.403: 97.4612% ( 1) 00:30:36.275 9.403 - 9.452: 97.4678% ( 1) 00:30:36.275 9.452 - 9.502: 97.4744% ( 1) 00:30:36.275 9.502 - 9.551: 97.4810% ( 1) 00:30:36.275 9.600 - 9.649: 97.4876% ( 1) 00:30:36.275 9.649 - 9.698: 97.5008% ( 2) 00:30:36.275 9.748 - 9.797: 97.5140% ( 2) 00:30:36.275 9.846 - 9.895: 97.5207% ( 1) 00:30:36.275 9.895 - 9.945: 97.5405% ( 3) 00:30:36.275 9.945 - 9.994: 97.5471% ( 1) 00:30:36.275 10.043 - 10.092: 97.5603% ( 2) 00:30:36.275 10.142 - 10.191: 97.5868% ( 4) 00:30:36.275 10.191 - 10.240: 97.5934% ( 1) 00:30:36.275 10.289 - 10.338: 97.6000% ( 1) 00:30:36.275 10.338 - 10.388: 97.6132% ( 2) 00:30:36.275 10.388 - 10.437: 97.6463% ( 5) 00:30:36.275 10.437 - 10.486: 97.6595% ( 2) 
00:30:36.275 10.535 - 10.585: 97.6793% ( 3) 00:30:36.275 10.634 - 10.683: 97.6860% ( 1) 00:30:36.275 10.683 - 10.732: 97.6926% ( 1) 00:30:36.275 10.782 - 10.831: 97.7058% ( 2) 00:30:36.275 10.831 - 10.880: 97.7190% ( 2) 00:30:36.275 10.929 - 10.978: 97.7256% ( 1) 00:30:36.275 10.978 - 11.028: 97.7455% ( 3) 00:30:36.275 11.175 - 11.225: 97.7521% ( 1) 00:30:36.275 11.225 - 11.274: 97.7653% ( 2) 00:30:36.275 11.372 - 11.422: 97.7719% ( 1) 00:30:36.275 11.471 - 11.520: 97.7983% ( 4) 00:30:36.275 11.520 - 11.569: 97.8050% ( 1) 00:30:36.275 12.012 - 12.062: 97.8116% ( 1) 00:30:36.275 12.209 - 12.258: 97.8182% ( 1) 00:30:36.275 12.258 - 12.308: 97.8248% ( 1) 00:30:36.275 12.702 - 12.800: 97.8314% ( 1) 00:30:36.275 12.800 - 12.898: 97.8380% ( 1) 00:30:36.275 12.997 - 13.095: 97.8446% ( 1) 00:30:36.275 13.095 - 13.194: 97.8512% ( 1) 00:30:36.275 13.194 - 13.292: 97.8777% ( 4) 00:30:36.275 13.292 - 13.391: 97.9174% ( 6) 00:30:36.275 13.391 - 13.489: 97.9372% ( 3) 00:30:36.275 13.489 - 13.588: 97.9702% ( 5) 00:30:36.275 13.588 - 13.686: 98.0033% ( 5) 00:30:36.275 13.686 - 13.785: 98.0959% ( 14) 00:30:36.275 13.785 - 13.883: 98.1421% ( 7) 00:30:36.275 13.883 - 13.982: 98.1884% ( 7) 00:30:36.275 13.982 - 14.080: 98.2281% ( 6) 00:30:36.275 14.080 - 14.178: 98.2810% ( 8) 00:30:36.275 14.178 - 14.277: 98.3140% ( 5) 00:30:36.275 14.277 - 14.375: 98.3603% ( 7) 00:30:36.275 14.375 - 14.474: 98.4529% ( 14) 00:30:36.275 14.474 - 14.572: 98.4992% ( 7) 00:30:36.275 14.572 - 14.671: 98.5388% ( 6) 00:30:36.275 14.671 - 14.769: 98.5851% ( 7) 00:30:36.275 14.769 - 14.868: 98.6182% ( 5) 00:30:36.275 14.868 - 14.966: 98.6777% ( 9) 00:30:36.275 14.966 - 15.065: 98.6975% ( 3) 00:30:36.275 15.065 - 15.163: 98.7240% ( 4) 00:30:36.275 15.163 - 15.262: 98.7835% ( 9) 00:30:36.275 15.262 - 15.360: 98.8231% ( 6) 00:30:36.275 15.360 - 15.458: 98.8628% ( 6) 00:30:36.275 15.458 - 15.557: 98.8826% ( 3) 00:30:36.275 15.557 - 15.655: 98.8959% ( 2) 00:30:36.275 15.655 - 15.754: 98.9289% ( 5) 00:30:36.275 15.754 - 15.852: 98.9620% ( 5) 00:30:36.275 15.852 - 15.951: 98.9818% ( 3) 00:30:36.275 15.951 - 16.049: 98.9884% ( 1) 00:30:36.275 16.049 - 16.148: 99.0347% ( 7) 00:30:36.275 16.148 - 16.246: 99.0479% ( 2) 00:30:36.275 16.246 - 16.345: 99.0744% ( 4) 00:30:36.275 16.345 - 16.443: 99.0876% ( 2) 00:30:36.275 16.542 - 16.640: 99.0942% ( 1) 00:30:36.275 16.640 - 16.738: 99.1273% ( 5) 00:30:36.275 16.738 - 16.837: 99.1405% ( 2) 00:30:36.275 16.837 - 16.935: 99.1471% ( 1) 00:30:36.275 17.034 - 17.132: 99.1537% ( 1) 00:30:36.275 17.231 - 17.329: 99.1603% ( 1) 00:30:36.275 17.329 - 17.428: 99.1669% ( 1) 00:30:36.275 17.526 - 17.625: 99.1736% ( 1) 00:30:36.275 17.625 - 17.723: 99.1868% ( 2) 00:30:36.275 18.018 - 18.117: 99.1934% ( 1) 00:30:36.275 18.117 - 18.215: 99.2066% ( 2) 00:30:36.275 18.215 - 18.314: 99.2132% ( 1) 00:30:36.275 18.314 - 18.412: 99.2264% ( 2) 00:30:36.275 18.412 - 18.511: 99.2331% ( 1) 00:30:36.275 18.609 - 18.708: 99.2397% ( 1) 00:30:36.275 18.806 - 18.905: 99.2463% ( 1) 00:30:36.275 18.905 - 19.003: 99.2529% ( 1) 00:30:36.275 19.003 - 19.102: 99.2595% ( 1) 00:30:36.275 19.102 - 19.200: 99.2661% ( 1) 00:30:36.275 19.200 - 19.298: 99.2727% ( 1) 00:30:36.275 19.988 - 20.086: 99.2793% ( 1) 00:30:36.275 20.086 - 20.185: 99.2860% ( 1) 00:30:36.275 20.283 - 20.382: 99.2926% ( 1) 00:30:36.275 20.480 - 20.578: 99.2992% ( 1) 00:30:36.275 20.578 - 20.677: 99.3058% ( 1) 00:30:36.275 20.775 - 20.874: 99.3124% ( 1) 00:30:36.275 21.071 - 21.169: 99.3256% ( 2) 00:30:36.275 21.563 - 21.662: 99.3322% ( 1) 00:30:36.275 21.858 - 21.957: 
99.3388% ( 1) 00:30:36.275 22.252 - 22.351: 99.3983% ( 9) 00:30:36.275 22.351 - 22.449: 99.4579% ( 9) 00:30:36.275 22.449 - 22.548: 99.5306% ( 11) 00:30:36.275 22.548 - 22.646: 99.6562% ( 19) 00:30:36.275 22.646 - 22.745: 99.7421% ( 13) 00:30:36.275 22.745 - 22.843: 99.7950% ( 8) 00:30:36.275 22.843 - 22.942: 99.8413% ( 7) 00:30:36.275 22.942 - 23.040: 99.8545% ( 2) 00:30:36.275 23.040 - 23.138: 99.8612% ( 1) 00:30:36.275 23.138 - 23.237: 99.8876% ( 4) 00:30:36.275 23.237 - 23.335: 99.8942% ( 1) 00:30:36.275 23.434 - 23.532: 99.9074% ( 2) 00:30:36.276 23.631 - 23.729: 99.9140% ( 1) 00:30:36.276 24.714 - 24.812: 99.9207% ( 1) 00:30:36.276 25.797 - 25.994: 99.9273% ( 1) 00:30:36.276 27.372 - 27.569: 99.9339% ( 1) 00:30:36.276 27.766 - 27.963: 99.9405% ( 1) 00:30:36.276 28.554 - 28.751: 99.9471% ( 1) 00:30:36.276 31.705 - 31.902: 99.9537% ( 1) 00:30:36.276 32.886 - 33.083: 99.9603% ( 1) 00:30:36.276 34.265 - 34.462: 99.9669% ( 1) 00:30:36.276 42.929 - 43.126: 99.9736% ( 1) 00:30:36.276 43.520 - 43.717: 99.9802% ( 1) 00:30:36.276 61.046 - 61.440: 99.9868% ( 1) 00:30:36.276 69.711 - 70.105: 99.9934% ( 1) 00:30:36.276 118.154 - 118.942: 100.0000% ( 1) 00:30:36.276 00:30:36.276 ************************************ 00:30:36.276 END TEST nvme_overhead 00:30:36.276 ************************************ 00:30:36.276 00:30:36.276 real 0m1.226s 00:30:36.276 user 0m1.079s 00:30:36.276 sys 0m0.098s 00:30:36.276 15:55:57 nvme.nvme_overhead -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:36.276 15:55:57 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:30:36.276 15:55:57 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:30:36.276 15:55:57 nvme -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:30:36.276 15:55:57 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:36.276 15:55:57 nvme -- common/autotest_common.sh@10 -- # set +x 00:30:36.276 ************************************ 00:30:36.276 START TEST nvme_arbitration 00:30:36.276 ************************************ 00:30:36.276 15:55:57 nvme.nvme_arbitration -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:30:39.558 Initializing NVMe Controllers 00:30:39.558 Attached to 0000:00:10.0 00:30:39.558 Attached to 0000:00:11.0 00:30:39.558 Attached to 0000:00:13.0 00:30:39.558 Attached to 0000:00:12.0 00:30:39.558 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:30:39.558 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:30:39.558 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:30:39.558 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:30:39.558 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:30:39.558 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:30:39.558 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:30:39.558 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:30:39.558 Initialization complete. Launching workers. 
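
The per-device rows printed below report arbitration throughput two ways, as IO/s and as projected seconds per 100000 I/Os, and the two figures are consistent with each other. A quick check of the core 0 row using the numbers from the output:

    awk 'BEGIN { printf "%.2f\n", 100000 / 981.33 }'   # 100000 ios / 981.33 IO/s = 101.90 secs
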
00:30:39.559 Starting thread on core 1 with urgent priority queue
00:30:39.559 Starting thread on core 2 with urgent priority queue
00:30:39.559 Starting thread on core 3 with urgent priority queue
00:30:39.559 Starting thread on core 0 with urgent priority queue
00:30:39.559 QEMU NVMe Ctrl (12340 ) core 0: 981.33 IO/s 101.90 secs/100000 ios
00:30:39.559 QEMU NVMe Ctrl (12342 ) core 0: 981.33 IO/s 101.90 secs/100000 ios
00:30:39.559 QEMU NVMe Ctrl (12341 ) core 1: 981.33 IO/s 101.90 secs/100000 ios
00:30:39.559 QEMU NVMe Ctrl (12342 ) core 1: 981.33 IO/s 101.90 secs/100000 ios
00:30:39.559 QEMU NVMe Ctrl (12343 ) core 2: 938.67 IO/s 106.53 secs/100000 ios
00:30:39.559 QEMU NVMe Ctrl (12342 ) core 3: 832.00 IO/s 120.19 secs/100000 ios
00:30:39.559 ========================================================
00:30:39.559
00:30:39.559 ************************************
00:30:39.559 END TEST nvme_arbitration
00:30:39.559 ************************************
00:30:39.559
00:30:39.559 real 0m3.325s
00:30:39.559 user 0m9.302s
00:30:39.559 sys 0m0.109s
00:30:39.559 15:56:00 nvme.nvme_arbitration -- common/autotest_common.sh@1128 -- # xtrace_disable
00:30:39.559 15:56:00 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x
00:30:39.559 15:56:00 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:30:39.559 15:56:00 nvme -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']'
00:30:39.559 15:56:00 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable
00:30:39.559 15:56:00 nvme -- common/autotest_common.sh@10 -- # set +x
00:30:39.559 ************************************
00:30:39.559 START TEST nvme_single_aen
00:30:39.559 ************************************
00:30:39.559 15:56:00 nvme.nvme_single_aen -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:30:39.816 Asynchronous Event Request test
00:30:39.816 Attached to 0000:00:10.0
00:30:39.816 Attached to 0000:00:11.0
00:30:39.816 Attached to 0000:00:13.0
00:30:39.816 Attached to 0000:00:12.0
00:30:39.816 Reset controller to setup AER completions for this process
00:30:39.816 Registering asynchronous event callbacks...
00:30:39.816 Getting orig temperature thresholds of all controllers
00:30:39.816 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:30:39.816 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:30:39.816 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:30:39.816 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:30:39.816 Setting all controllers temperature threshold low to trigger AER
00:30:39.816 Waiting for all controllers temperature threshold to be set lower
00:30:39.816 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:30:39.816 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0
00:30:39.816 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:30:39.816 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0
00:30:39.816 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:30:39.816 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0
00:30:39.816 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:30:39.816 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0
00:30:39.816 Waiting for all controllers to trigger AER and reset threshold
00:30:39.816 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius)
00:30:39.816 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius)
00:30:39.816 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius)
00:30:39.816 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius)
00:30:39.816 Cleaning up...
00:30:39.816 ************************************
00:30:39.816 END TEST nvme_single_aen
00:30:39.816 ************************************
00:30:39.816
00:30:39.816 real 0m0.206s
00:30:39.816 user 0m0.081s
00:30:39.816 sys 0m0.090s
00:30:39.816 15:56:01 nvme.nvme_single_aen -- common/autotest_common.sh@1128 -- # xtrace_disable
00:30:39.816 15:56:01 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x
00:30:39.816 15:56:01 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers
00:30:39.816 15:56:01 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:30:39.816 15:56:01 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable
00:30:39.816 15:56:01 nvme -- common/autotest_common.sh@10 -- # set +x
00:30:39.816 ************************************
00:30:39.816 START TEST nvme_doorbell_aers
00:30:39.816 ************************************
00:30:39.816 15:56:01 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1127 -- # nvme_doorbell_aers
00:30:39.816 15:56:01 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=()
00:30:39.816 15:56:01 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf
00:30:39.816 15:56:01 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs))
00:30:39.816 15:56:01 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs
00:30:39.817 15:56:01 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # bdfs=()
00:30:39.817 15:56:01 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # local bdfs
00:30:39.817 15:56:01 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:30:39.817 15:56:01 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr'
00:30:39.817 15:56:01 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
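
The xtrace above shows how get_nvme_bdfs discovers the controllers under test: gen_nvme.sh emits a JSON configuration and jq pulls out each PCI address (traddr). A self-contained sketch of the same pipeline, assuming the repo layout used in this run:

    rootdir=/home/vagrant/spdk_repo/spdk
    # Collect every NVMe PCI address from the generated JSON config.
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    # Bail out when discovery finds nothing, mirroring the (( ... == 0 )) guard traced just below.
    (( ${#bdfs[@]} == 0 )) && exit 1
    printf '%s\n' "${bdfs[@]}"   # in this run: 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
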
00:30:39.817 15:56:01 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:30:39.817 15:56:01 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:30:39.817 15:56:01 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:30:39.817 15:56:01 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:30:40.133 [2024-11-05 15:56:01.345819] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63176) is not found. Dropping the request. 00:30:50.095 Executing: test_write_invalid_db 00:30:50.095 Waiting for AER completion... 00:30:50.095 Failure: test_write_invalid_db 00:30:50.095 00:30:50.095 Executing: test_invalid_db_write_overflow_sq 00:30:50.095 Waiting for AER completion... 00:30:50.095 Failure: test_invalid_db_write_overflow_sq 00:30:50.095 00:30:50.095 Executing: test_invalid_db_write_overflow_cq 00:30:50.095 Waiting for AER completion... 00:30:50.095 Failure: test_invalid_db_write_overflow_cq 00:30:50.095 00:30:50.095 15:56:11 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:30:50.095 15:56:11 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:30:50.095 [2024-11-05 15:56:11.371433] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63176) is not found. Dropping the request. 00:31:00.128 Executing: test_write_invalid_db 00:31:00.128 Waiting for AER completion... 00:31:00.128 Failure: test_write_invalid_db 00:31:00.128 00:31:00.128 Executing: test_invalid_db_write_overflow_sq 00:31:00.128 Waiting for AER completion... 00:31:00.128 Failure: test_invalid_db_write_overflow_sq 00:31:00.128 00:31:00.128 Executing: test_invalid_db_write_overflow_cq 00:31:00.128 Waiting for AER completion... 00:31:00.128 Failure: test_invalid_db_write_overflow_cq 00:31:00.128 00:31:00.128 15:56:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:31:00.128 15:56:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:31:00.128 [2024-11-05 15:56:21.400059] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63176) is not found. Dropping the request. 00:31:10.105 Executing: test_write_invalid_db 00:31:10.105 Waiting for AER completion... 00:31:10.105 Failure: test_write_invalid_db 00:31:10.105 00:31:10.105 Executing: test_invalid_db_write_overflow_sq 00:31:10.105 Waiting for AER completion... 00:31:10.105 Failure: test_invalid_db_write_overflow_sq 00:31:10.105 00:31:10.105 Executing: test_invalid_db_write_overflow_cq 00:31:10.105 Waiting for AER completion... 
00:31:10.105 Failure: test_invalid_db_write_overflow_cq 00:31:10.105 00:31:10.105 15:56:31 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:31:10.105 15:56:31 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:31:10.105 [2024-11-05 15:56:31.466953] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63176) is not found. Dropping the request. 00:31:20.083 Executing: test_write_invalid_db 00:31:20.083 Waiting for AER completion... 00:31:20.083 Failure: test_write_invalid_db 00:31:20.083 00:31:20.083 Executing: test_invalid_db_write_overflow_sq 00:31:20.083 Waiting for AER completion... 00:31:20.083 Failure: test_invalid_db_write_overflow_sq 00:31:20.083 00:31:20.083 Executing: test_invalid_db_write_overflow_cq 00:31:20.083 Waiting for AER completion... 00:31:20.083 Failure: test_invalid_db_write_overflow_cq 00:31:20.083 00:31:20.083 ************************************ 00:31:20.083 END TEST nvme_doorbell_aers 00:31:20.083 ************************************ 00:31:20.083 00:31:20.083 real 0m40.179s 00:31:20.083 user 0m34.164s 00:31:20.083 sys 0m5.631s 00:31:20.083 15:56:41 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:20.083 15:56:41 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:31:20.083 15:56:41 nvme -- nvme/nvme.sh@97 -- # uname 00:31:20.083 15:56:41 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:31:20.083 15:56:41 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:31:20.083 15:56:41 nvme -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:31:20.083 15:56:41 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:20.083 15:56:41 nvme -- common/autotest_common.sh@10 -- # set +x 00:31:20.083 ************************************ 00:31:20.083 START TEST nvme_multi_aen 00:31:20.083 ************************************ 00:31:20.083 15:56:41 nvme.nvme_multi_aen -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:31:20.341 [2024-11-05 15:56:41.451283] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63176) is not found. Dropping the request. 00:31:20.341 [2024-11-05 15:56:41.451354] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63176) is not found. Dropping the request. 00:31:20.341 [2024-11-05 15:56:41.451364] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63176) is not found. Dropping the request. 00:31:20.341 [2024-11-05 15:56:41.454274] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63176) is not found. Dropping the request. 00:31:20.341 [2024-11-05 15:56:41.454396] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63176) is not found. Dropping the request. 00:31:20.341 [2024-11-05 15:56:41.454435] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63176) is not found. Dropping the request. 00:31:20.341 [2024-11-05 15:56:41.456879] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63176) is not found. 
Dropping the request. 00:31:20.341 [2024-11-05 15:56:41.456982] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63176) is not found. Dropping the request. 00:31:20.341 [2024-11-05 15:56:41.457011] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63176) is not found. Dropping the request. 00:31:20.341 [2024-11-05 15:56:41.459290] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63176) is not found. Dropping the request. 00:31:20.341 [2024-11-05 15:56:41.459365] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63176) is not found. Dropping the request. 00:31:20.341 [2024-11-05 15:56:41.459390] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63176) is not found. Dropping the request. 00:31:20.341 Child process pid: 63702 00:31:20.341 [Child] Asynchronous Event Request test 00:31:20.341 [Child] Attached to 0000:00:10.0 00:31:20.341 [Child] Attached to 0000:00:11.0 00:31:20.341 [Child] Attached to 0000:00:13.0 00:31:20.341 [Child] Attached to 0000:00:12.0 00:31:20.341 [Child] Registering asynchronous event callbacks... 00:31:20.341 [Child] Getting orig temperature thresholds of all controllers 00:31:20.341 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:31:20.341 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:31:20.341 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:31:20.341 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:31:20.341 [Child] Waiting for all controllers to trigger AER and reset threshold 00:31:20.341 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:31:20.341 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:31:20.341 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:31:20.341 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:31:20.341 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:31:20.342 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:31:20.342 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:31:20.342 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:31:20.342 [Child] Cleaning up... 00:31:20.599 Asynchronous Event Request test 00:31:20.599 Attached to 0000:00:10.0 00:31:20.599 Attached to 0000:00:11.0 00:31:20.599 Attached to 0000:00:13.0 00:31:20.599 Attached to 0000:00:12.0 00:31:20.599 Reset controller to setup AER completions for this process 00:31:20.599 Registering asynchronous event callbacks... 
00:31:20.599 Getting orig temperature thresholds of all controllers 00:31:20.599 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:31:20.599 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:31:20.599 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:31:20.599 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:31:20.599 Setting all controllers temperature threshold low to trigger AER 00:31:20.599 Waiting for all controllers temperature threshold to be set lower 00:31:20.599 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:31:20.599 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:31:20.599 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:31:20.599 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:31:20.599 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:31:20.599 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:31:20.599 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:31:20.599 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:31:20.599 Waiting for all controllers to trigger AER and reset threshold 00:31:20.599 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:31:20.599 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:31:20.599 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:31:20.599 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:31:20.599 Cleaning up... 00:31:20.599 ************************************ 00:31:20.599 END TEST nvme_multi_aen 00:31:20.599 ************************************ 00:31:20.599 00:31:20.599 real 0m0.430s 00:31:20.599 user 0m0.149s 00:31:20.599 sys 0m0.172s 00:31:20.599 15:56:41 nvme.nvme_multi_aen -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:20.599 15:56:41 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:31:20.599 15:56:41 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:31:20.599 15:56:41 nvme -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:31:20.599 15:56:41 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:20.599 15:56:41 nvme -- common/autotest_common.sh@10 -- # set +x 00:31:20.599 ************************************ 00:31:20.599 START TEST nvme_startup 00:31:20.599 ************************************ 00:31:20.600 15:56:41 nvme.nvme_startup -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:31:20.600 Initializing NVMe Controllers 00:31:20.600 Attached to 0000:00:10.0 00:31:20.600 Attached to 0000:00:11.0 00:31:20.600 Attached to 0000:00:13.0 00:31:20.600 Attached to 0000:00:12.0 00:31:20.600 Initialization complete. 00:31:20.600 Time used:141997.188 (us). 
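
The startup test reports its initialization time in microseconds. Converting the figure above is plain arithmetic:

    echo 'scale=3; 141997.188 / 1000000' | bc   # prints .141, i.e. roughly 0.14 s

which is consistent with the 0m0.197s real time printed below, since the latter also covers process setup and teardown.
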
00:31:20.600 ************************************ 00:31:20.600 END TEST nvme_startup 00:31:20.600 ************************************ 00:31:20.600 00:31:20.600 real 0m0.197s 00:31:20.600 user 0m0.059s 00:31:20.600 sys 0m0.096s 00:31:20.600 15:56:41 nvme.nvme_startup -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:20.600 15:56:41 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:31:20.858 15:56:41 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:31:20.858 15:56:41 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:31:20.858 15:56:41 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:20.858 15:56:41 nvme -- common/autotest_common.sh@10 -- # set +x 00:31:20.858 ************************************ 00:31:20.858 START TEST nvme_multi_secondary 00:31:20.858 ************************************ 00:31:20.858 15:56:41 nvme.nvme_multi_secondary -- common/autotest_common.sh@1127 -- # nvme_multi_secondary 00:31:20.858 15:56:41 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=63752 00:31:20.858 15:56:41 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:31:20.858 15:56:41 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=63753 00:31:20.858 15:56:41 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:31:20.858 15:56:41 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:31:24.137 Initializing NVMe Controllers 00:31:24.137 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:31:24.137 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:31:24.137 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:31:24.137 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:31:24.137 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:31:24.137 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:31:24.137 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:31:24.137 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:31:24.137 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:31:24.137 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:31:24.137 Initialization complete. Launching workers. 
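# Aside: nvme_multi_secondary exercises SPDK's multi-process mode: three
# spdk_nvme_perf instances join one DPDK process group via -i 0, on disjoint
# core masks (0x1, 0x2, 0x4) and staggered -t runtimes, so the secondaries
# attach to controllers an existing primary already initialized. In outline
# (a sketch of the flow traced above, not nvme.sh verbatim):
perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
"$perf" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 & pid0=$!  # longer-lived instance
"$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 & pid1=$!
"$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4            # third in the foreground
wait "$pid0" "$pid1"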
00:31:24.137 ======================================================== 00:31:24.137 Latency(us) 00:31:24.137 Device Information : IOPS MiB/s Average min max 00:31:24.137 PCIE (0000:00:10.0) NSID 1 from core 2: 3437.76 13.43 4651.79 824.97 11933.25 00:31:24.137 PCIE (0000:00:11.0) NSID 1 from core 2: 3437.76 13.43 4653.77 848.07 12587.94 00:31:24.137 PCIE (0000:00:13.0) NSID 1 from core 2: 3437.76 13.43 4653.51 851.59 12422.60 00:31:24.137 PCIE (0000:00:12.0) NSID 1 from core 2: 3437.76 13.43 4654.78 840.43 12203.64 00:31:24.137 PCIE (0000:00:12.0) NSID 2 from core 2: 3437.76 13.43 4654.34 865.32 12521.44 00:31:24.137 PCIE (0000:00:12.0) NSID 3 from core 2: 3437.76 13.43 4655.17 853.18 12208.03 00:31:24.137 ======================================================== 00:31:24.137 Total : 20626.53 80.57 4653.90 824.97 12587.94 00:31:24.137 00:31:24.137 15:56:45 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 63752 00:31:24.137 Initializing NVMe Controllers 00:31:24.137 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:31:24.137 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:31:24.137 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:31:24.137 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:31:24.137 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:31:24.137 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:31:24.137 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:31:24.137 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:31:24.137 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:31:24.137 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:31:24.137 Initialization complete. Launching workers. 00:31:24.137 ======================================================== 00:31:24.137 Latency(us) 00:31:24.137 Device Information : IOPS MiB/s Average min max 00:31:24.137 PCIE (0000:00:10.0) NSID 1 from core 1: 7734.42 30.21 2067.18 995.67 7172.95 00:31:24.137 PCIE (0000:00:11.0) NSID 1 from core 1: 7734.42 30.21 2068.17 1044.03 6507.95 00:31:24.137 PCIE (0000:00:13.0) NSID 1 from core 1: 7734.42 30.21 2068.12 970.99 6548.94 00:31:24.137 PCIE (0000:00:12.0) NSID 1 from core 1: 7734.42 30.21 2068.01 1025.31 6602.33 00:31:24.137 PCIE (0000:00:12.0) NSID 2 from core 1: 7734.42 30.21 2067.95 1021.55 6735.11 00:31:24.137 PCIE (0000:00:12.0) NSID 3 from core 1: 7734.42 30.21 2067.89 920.25 6864.49 00:31:24.137 ======================================================== 00:31:24.137 Total : 46406.49 181.28 2067.89 920.25 7172.95 00:31:24.137 00:31:26.034 Initializing NVMe Controllers 00:31:26.034 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:31:26.034 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:31:26.034 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:31:26.034 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:31:26.034 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:31:26.034 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:31:26.034 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:31:26.034 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:31:26.034 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:31:26.034 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:31:26.034 Initialization complete. Launching workers. 
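# Aside: the throughput and latency columns in these tables are
# self-consistent: MiB/s = IOPS x 4 KiB blocks (3437.76 x 4096 / 2^20 = 13.43),
# and with -q 16 Little's law gives average latency = 16 / 3437.76 s = 4654 us,
# matching the ~4652-4655 us averages in the core-2 table above.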
00:31:26.034 ======================================================== 00:31:26.034 Latency(us) 00:31:26.034 Device Information : IOPS MiB/s Average min max 00:31:26.034 PCIE (0000:00:10.0) NSID 1 from core 0: 10667.29 41.67 1498.68 677.47 5594.91 00:31:26.034 PCIE (0000:00:11.0) NSID 1 from core 0: 10667.29 41.67 1499.59 689.20 5520.49 00:31:26.034 PCIE (0000:00:13.0) NSID 1 from core 0: 10667.29 41.67 1499.63 687.42 5576.73 00:31:26.034 PCIE (0000:00:12.0) NSID 1 from core 0: 10667.29 41.67 1499.67 690.74 5032.04 00:31:26.034 PCIE (0000:00:12.0) NSID 2 from core 0: 10667.29 41.67 1499.70 697.53 4963.90 00:31:26.034 PCIE (0000:00:12.0) NSID 3 from core 0: 10667.29 41.67 1499.74 697.25 4971.30 00:31:26.034 ======================================================== 00:31:26.034 Total : 64003.72 250.01 1499.50 677.47 5594.91 00:31:26.034 00:31:26.034 15:56:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 63753 00:31:26.034 15:56:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=63823 00:31:26.034 15:56:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:31:26.034 15:56:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=63824 00:31:26.034 15:56:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:31:26.034 15:56:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:31:29.313 Initializing NVMe Controllers 00:31:29.313 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:31:29.313 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:31:29.313 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:31:29.313 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:31:29.313 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:31:29.313 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:31:29.313 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:31:29.313 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:31:29.313 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:31:29.313 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:31:29.313 Initialization complete. Launching workers. 
00:31:29.313 ======================================================== 00:31:29.313 Latency(us) 00:31:29.313 Device Information : IOPS MiB/s Average min max 00:31:29.313 PCIE (0000:00:10.0) NSID 1 from core 1: 8211.68 32.08 1947.06 706.48 5530.56 00:31:29.313 PCIE (0000:00:11.0) NSID 1 from core 1: 8211.68 32.08 1948.10 724.12 5776.02 00:31:29.313 PCIE (0000:00:13.0) NSID 1 from core 1: 8211.68 32.08 1948.05 720.18 6018.80 00:31:29.313 PCIE (0000:00:12.0) NSID 1 from core 1: 8211.68 32.08 1948.04 724.24 6159.32 00:31:29.313 PCIE (0000:00:12.0) NSID 2 from core 1: 8211.68 32.08 1948.02 717.09 5985.56 00:31:29.313 PCIE (0000:00:12.0) NSID 3 from core 1: 8211.68 32.08 1948.07 719.83 5701.17 00:31:29.313 ======================================================== 00:31:29.313 Total : 49270.10 192.46 1947.89 706.48 6159.32 00:31:29.313 00:31:29.572 Initializing NVMe Controllers 00:31:29.572 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:31:29.572 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:31:29.572 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:31:29.572 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:31:29.572 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:31:29.572 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:31:29.572 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:31:29.572 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:31:29.572 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:31:29.572 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:31:29.572 Initialization complete. Launching workers. 00:31:29.572 ======================================================== 00:31:29.572 Latency(us) 00:31:29.572 Device Information : IOPS MiB/s Average min max 00:31:29.572 PCIE (0000:00:10.0) NSID 1 from core 0: 7563.62 29.55 2113.68 736.49 6122.66 00:31:29.572 PCIE (0000:00:11.0) NSID 1 from core 0: 7563.62 29.55 2114.57 745.24 5636.25 00:31:29.572 PCIE (0000:00:13.0) NSID 1 from core 0: 7563.62 29.55 2114.33 758.80 5703.74 00:31:29.572 PCIE (0000:00:12.0) NSID 1 from core 0: 7563.62 29.55 2114.03 764.45 5471.43 00:31:29.572 PCIE (0000:00:12.0) NSID 2 from core 0: 7563.62 29.55 2113.78 758.52 5445.38 00:31:29.572 PCIE (0000:00:12.0) NSID 3 from core 0: 7563.62 29.55 2113.62 750.34 5694.44 00:31:29.572 ======================================================== 00:31:29.572 Total : 45381.74 177.27 2114.00 736.49 6122.66 00:31:29.572 00:31:31.513 Initializing NVMe Controllers 00:31:31.513 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:31:31.513 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:31:31.513 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:31:31.513 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:31:31.513 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:31:31.513 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:31:31.513 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:31:31.513 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:31:31.513 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:31:31.513 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:31:31.513 Initialization complete. Launching workers. 
00:31:31.513 ======================================================== 00:31:31.513 Latency(us) 00:31:31.513 Device Information : IOPS MiB/s Average min max 00:31:31.513 PCIE (0000:00:10.0) NSID 1 from core 2: 4547.70 17.76 3515.91 728.36 13027.06 00:31:31.513 PCIE (0000:00:11.0) NSID 1 from core 2: 4547.70 17.76 3517.45 748.27 12859.96 00:31:31.513 PCIE (0000:00:13.0) NSID 1 from core 2: 4547.70 17.76 3517.03 715.91 13481.29 00:31:31.513 PCIE (0000:00:12.0) NSID 1 from core 2: 4547.70 17.76 3517.69 678.48 13281.36 00:31:31.513 PCIE (0000:00:12.0) NSID 2 from core 2: 4547.70 17.76 3517.16 650.03 12658.98 00:31:31.513 PCIE (0000:00:12.0) NSID 3 from core 2: 4547.70 17.76 3517.24 602.82 13226.37 00:31:31.513 ======================================================== 00:31:31.513 Total : 27286.18 106.59 3517.08 602.82 13481.29 00:31:31.513 00:31:31.513 ************************************ 00:31:31.513 END TEST nvme_multi_secondary 00:31:31.513 ************************************ 00:31:31.513 15:56:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 63823 00:31:31.513 15:56:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 63824 00:31:31.513 00:31:31.513 real 0m10.727s 00:31:31.513 user 0m18.509s 00:31:31.513 sys 0m0.641s 00:31:31.513 15:56:52 nvme.nvme_multi_secondary -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:31.513 15:56:52 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:31:31.513 15:56:52 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:31:31.513 15:56:52 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:31:31.513 15:56:52 nvme -- common/autotest_common.sh@1091 -- # [[ -e /proc/62785 ]] 00:31:31.513 15:56:52 nvme -- common/autotest_common.sh@1092 -- # kill 62785 00:31:31.513 15:56:52 nvme -- common/autotest_common.sh@1093 -- # wait 62785 00:31:31.513 [2024-11-05 15:56:52.746944] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63701) is not found. Dropping the request. 00:31:31.513 [2024-11-05 15:56:52.747015] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63701) is not found. Dropping the request. 00:31:31.513 [2024-11-05 15:56:52.747046] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63701) is not found. Dropping the request. 00:31:31.513 [2024-11-05 15:56:52.747069] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63701) is not found. Dropping the request. 00:31:31.513 [2024-11-05 15:56:52.749129] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63701) is not found. Dropping the request. 00:31:31.513 [2024-11-05 15:56:52.749183] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63701) is not found. Dropping the request. 00:31:31.513 [2024-11-05 15:56:52.749206] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63701) is not found. Dropping the request. 00:31:31.513 [2024-11-05 15:56:52.749228] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63701) is not found. Dropping the request. 00:31:31.513 [2024-11-05 15:56:52.751639] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63701) is not found. Dropping the request. 
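# Aside: the "owning process (pid 63701) is not found" flood here is a
# teardown artifact, not a failure: admin requests still queued on behalf of a
# test process that has already exited are dropped while the stub primary is
# being killed. kill_stub itself reduces to roughly this (a sketch from the
# xtrace above; the pidfile removal follows just below):
kill_stub_sketch() {
  local pid=62785                  # stub pid in this run
  [[ -e /proc/$pid ]] || return 0  # nothing to do if it is already gone
  kill "$pid"
  wait "$pid" 2>/dev/null          # reap it, as 'wait 62785' does above
  rm -f /var/run/spdk_stub0
}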
00:31:31.513 [2024-11-05 15:56:52.751884] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63701) is not found. Dropping the request. 00:31:31.513 [2024-11-05 15:56:52.752019] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63701) is not found. Dropping the request. 00:31:31.513 [2024-11-05 15:56:52.752140] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63701) is not found. Dropping the request. 00:31:31.513 [2024-11-05 15:56:52.754868] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63701) is not found. Dropping the request. 00:31:31.513 [2024-11-05 15:56:52.755022] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63701) is not found. Dropping the request. 00:31:31.513 [2024-11-05 15:56:52.755144] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63701) is not found. Dropping the request. 00:31:31.513 [2024-11-05 15:56:52.755262] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63701) is not found. Dropping the request. 00:31:31.772 15:56:52 nvme -- common/autotest_common.sh@1095 -- # rm -f /var/run/spdk_stub0 00:31:31.772 15:56:52 nvme -- common/autotest_common.sh@1099 -- # echo 2 00:31:31.772 15:56:52 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:31:31.772 15:56:52 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:31:31.772 15:56:52 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:31.772 15:56:52 nvme -- common/autotest_common.sh@10 -- # set +x 00:31:31.772 ************************************ 00:31:31.772 START TEST bdev_nvme_reset_stuck_adm_cmd 00:31:31.772 ************************************ 00:31:31.772 15:56:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:31:31.772 * Looking for test storage... 
00:31:31.772 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:31:31.772 15:56:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:31.772 15:56:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # lcov --version 00:31:31.772 15:56:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:31.772 15:56:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:31.772 15:56:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:31.772 15:56:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:31.772 15:56:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:31.772 15:56:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:31:31.772 15:56:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:31:31.772 15:56:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:31:31.772 15:56:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:31:31.772 15:56:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:31:31.772 15:56:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:31:31.772 15:56:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:31:31.772 15:56:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:31.772 15:56:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:31:31.772 15:56:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:31:31.772 15:56:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:31.772 15:56:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:31.772 15:56:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:31:31.772 15:56:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:31:31.772 15:56:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:31.772 15:56:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:31:31.772 15:56:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:31:31.772 15:56:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:31:31.772 15:56:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:31:31.772 15:56:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:31.772 15:56:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:31:31.772 15:56:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:31:31.772 15:56:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:31.772 15:56:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:31.772 15:56:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:31:31.772 15:56:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:31.772 15:56:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:31.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:31.772 --rc genhtml_branch_coverage=1 00:31:31.772 --rc genhtml_function_coverage=1 00:31:31.772 --rc genhtml_legend=1 00:31:31.772 --rc geninfo_all_blocks=1 00:31:31.772 --rc geninfo_unexecuted_blocks=1 00:31:31.772 00:31:31.772 ' 00:31:31.772 15:56:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:31.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:31.772 --rc genhtml_branch_coverage=1 00:31:31.772 --rc genhtml_function_coverage=1 00:31:31.772 --rc genhtml_legend=1 00:31:31.772 --rc geninfo_all_blocks=1 00:31:31.772 --rc geninfo_unexecuted_blocks=1 00:31:31.772 00:31:31.772 ' 00:31:31.772 15:56:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:31.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:31.772 --rc genhtml_branch_coverage=1 00:31:31.772 --rc genhtml_function_coverage=1 00:31:31.772 --rc genhtml_legend=1 00:31:31.772 --rc geninfo_all_blocks=1 00:31:31.772 --rc geninfo_unexecuted_blocks=1 00:31:31.772 00:31:31.772 ' 00:31:31.772 15:56:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:31.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:31.772 --rc genhtml_branch_coverage=1 00:31:31.772 --rc genhtml_function_coverage=1 00:31:31.772 --rc genhtml_legend=1 00:31:31.772 --rc geninfo_all_blocks=1 00:31:31.772 --rc geninfo_unexecuted_blocks=1 00:31:31.772 00:31:31.772 ' 00:31:31.773 15:56:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:31:31.773 15:56:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:31:31.773 15:56:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:31:31.773 
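# Aside: the reset-stuck-admin-command test injects a failure on the next
# GET FEATURES admin command (the expected SCT/SC are set just below), resets
# the controller mid-command, then base64-decodes the saved completion to
# verify the status. The base64_decode_bits trace further down boils down to
# this (a sketch; the byte offsets assume the standard 16-byte CQE with the
# 16-bit status word in DW3 bits 31:16, which may differ from the helper's
# exact indexing):
decode_status_bits() {
  local b64=$1 off=$2 mask=$3
  local bytes=($(base64 -d <(printf '%s' "$b64") | hexdump -ve '/1 "0x%02x\n"'))
  local status=$(( bytes[14] | bytes[15] << 8 ))  # little-endian status word
  printf '0x%x\n' $(( status >> off & mask ))
}
decode_status_bits AAAAAAAAAAAAAAAAAAACAA== 1 255  # -> 0x1, SC (Invalid Opcode)
decode_status_bits AAAAAAAAAAAAAAAAAAACAA== 9 3    # -> 0x0, SCT (generic)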
15:56:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:31:31.773 15:56:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:31:31.773 15:56:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:31:31.773 15:56:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # bdfs=() 00:31:31.773 15:56:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # local bdfs 00:31:31.773 15:56:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:31:31.773 15:56:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:31:31.773 15:56:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # bdfs=() 00:31:31.773 15:56:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # local bdfs 00:31:31.773 15:56:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:31.773 15:56:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:31:31.773 15:56:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:31:32.030 15:56:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:31:32.030 15:56:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:31:32.030 15:56:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:31:32.030 15:56:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:31:32.030 15:56:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:31:32.030 15:56:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:31:32.030 15:56:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=63986 00:31:32.030 15:56:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:31:32.030 15:56:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 63986 00:31:32.030 15:56:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@833 -- # '[' -z 63986 ']' 00:31:32.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:32.030 15:56:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:32.030 15:56:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:32.030 15:56:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
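# Aside: get_first_nvme_bdf above is just "first traddr in gen_nvme.sh's JSON
# config". Sketch ($rootdir stands for the spdk checkout):
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
(( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
bdf=${bdfs[0]}   # 0000:00:10.0 on this VM, with 11.0/12.0/13.0 behind it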
00:31:32.030 15:56:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:32.030 15:56:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:31:32.030 [2024-11-05 15:56:53.213013] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 00:31:32.030 [2024-11-05 15:56:53.213108] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63986 ] 00:31:32.030 [2024-11-05 15:56:53.371292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:32.288 [2024-11-05 15:56:53.477630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:32.288 [2024-11-05 15:56:53.478056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:32.288 [2024-11-05 15:56:53.478280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:32.288 [2024-11-05 15:56:53.478300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:32.897 15:56:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:32.897 15:56:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@866 -- # return 0 00:31:32.897 15:56:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:31:32.897 15:56:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.897 15:56:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:31:32.897 nvme0n1 00:31:32.897 15:56:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.897 15:56:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:31:32.897 15:56:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_WjdCq.txt 00:31:32.897 15:56:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:31:32.897 15:56:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.897 15:56:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:31:32.897 true 00:31:32.897 15:56:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.897 15:56:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:31:32.897 15:56:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1730822214 00:31:32.897 15:56:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=64009 00:31:32.897 15:56:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:31:32.897 15:56:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:31:32.897 15:56:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:31:35.424 15:56:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:31:35.424 15:56:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.424 15:56:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:31:35.424 [2024-11-05 15:56:56.220820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:31:35.424 [2024-11-05 15:56:56.221448] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:35.424 [2024-11-05 15:56:56.221497] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:31:35.424 [2024-11-05 15:56:56.221511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.424 [2024-11-05 15:56:56.223032] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:31:35.424 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 64009 00:31:35.424 15:56:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.424 15:56:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 64009 00:31:35.424 15:56:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 64009 00:31:35.424 15:56:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:31:35.424 15:56:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:31:35.424 15:56:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:35.424 15:56:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.424 15:56:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:31:35.424 15:56:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.424 15:56:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:31:35.424 15:56:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_WjdCq.txt 00:31:35.424 15:56:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:31:35.424 15:56:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:31:35.424 15:56:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:31:35.424 15:56:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:31:35.424 15:56:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:31:35.424 15:56:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:31:35.424 15:56:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:31:35.424 15:56:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:31:35.424 15:56:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:31:35.424 15:56:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:31:35.424 15:56:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:31:35.424 15:56:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:31:35.424 15:56:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:31:35.424 15:56:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:31:35.424 15:56:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:31:35.424 15:56:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:31:35.424 15:56:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:31:35.424 15:56:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:31:35.424 15:56:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:31:35.424 15:56:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_WjdCq.txt 00:31:35.424 15:56:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 63986 00:31:35.424 15:56:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # '[' -z 63986 ']' 00:31:35.424 15:56:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # kill -0 63986 00:31:35.424 15:56:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@957 -- # uname 00:31:35.424 15:56:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:35.424 15:56:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63986 00:31:35.424 killing process with pid 63986 00:31:35.424 15:56:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:35.424 15:56:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:35.424 15:56:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63986' 00:31:35.425 15:56:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@971 -- # kill 63986 00:31:35.425 15:56:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@976 -- # wait 63986 00:31:36.797 15:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:31:36.797 15:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:31:36.797 00:31:36.797 real 0m4.899s 00:31:36.797 user 0m17.528s 00:31:36.797 sys 0m0.489s 00:31:36.797 15:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:31:36.797 15:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:31:36.797 ************************************ 00:31:36.797 END TEST bdev_nvme_reset_stuck_adm_cmd 00:31:36.797 ************************************ 00:31:36.797 15:56:57 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:31:36.797 15:56:57 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:31:36.797 15:56:57 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:31:36.797 15:56:57 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:36.797 15:56:57 nvme -- common/autotest_common.sh@10 -- # set +x 00:31:36.797 ************************************ 00:31:36.797 START TEST nvme_fio 00:31:36.797 ************************************ 00:31:36.797 15:56:57 nvme.nvme_fio -- common/autotest_common.sh@1127 -- # nvme_fio_test 00:31:36.797 15:56:57 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:31:36.797 15:56:57 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:31:36.797 15:56:57 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:31:36.797 15:56:57 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # bdfs=() 00:31:36.797 15:56:57 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # local bdfs 00:31:36.797 15:56:57 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:36.797 15:56:57 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:31:36.797 15:56:57 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:31:36.797 15:56:57 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:31:36.797 15:56:57 nvme.nvme_fio -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:31:36.797 15:56:57 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:31:36.797 15:56:57 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:31:36.797 15:56:57 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:31:36.797 15:56:57 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:31:36.797 15:56:57 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:31:37.055 15:56:58 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:31:37.055 15:56:58 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:31:37.055 15:56:58 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:31:37.055 15:56:58 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:31:37.055 15:56:58 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:31:37.055 15:56:58 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:31:37.055 15:56:58 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:37.055 15:56:58 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:31:37.055 15:56:58 nvme.nvme_fio -- 
common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:31:37.055 15:56:58 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:31:37.055 15:56:58 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:31:37.055 15:56:58 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:31:37.055 15:56:58 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:31:37.055 15:56:58 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:31:37.055 15:56:58 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:31:37.314 15:56:58 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:31:37.314 15:56:58 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:31:37.314 15:56:58 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:31:37.314 15:56:58 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:31:37.314 15:56:58 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:31:37.314 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:37.314 fio-3.35 00:31:37.314 Starting 1 thread 00:31:43.889 00:31:43.889 test: (groupid=0, jobs=1): err= 0: pid=64151: Tue Nov 5 15:57:04 2024 00:31:43.889 read: IOPS=23.6k, BW=92.0MiB/s (96.5MB/s)(184MiB/2001msec) 00:31:43.889 slat (usec): min=3, max=100, avg= 4.99, stdev= 2.20 00:31:43.889 clat (usec): min=424, max=8480, avg=2707.94, stdev=793.42 00:31:43.889 lat (usec): min=428, max=8485, avg=2712.93, stdev=794.70 00:31:43.889 clat percentiles (usec): 00:31:43.889 | 1.00th=[ 1450], 5.00th=[ 2073], 10.00th=[ 2311], 20.00th=[ 2409], 00:31:43.889 | 30.00th=[ 2442], 40.00th=[ 2474], 50.00th=[ 2507], 60.00th=[ 2540], 00:31:43.889 | 70.00th=[ 2606], 80.00th=[ 2737], 90.00th=[ 3359], 95.00th=[ 4555], 00:31:43.889 | 99.00th=[ 6194], 99.50th=[ 6325], 99.90th=[ 7570], 99.95th=[ 7767], 00:31:43.889 | 99.99th=[ 8225] 00:31:43.889 bw ( KiB/s): min=90880, max=97352, per=100.00%, avg=94477.67, stdev=3296.07, samples=3 00:31:43.889 iops : min=22720, max=24338, avg=23619.33, stdev=823.99, samples=3 00:31:43.889 write: IOPS=23.4k, BW=91.4MiB/s (95.8MB/s)(183MiB/2001msec); 0 zone resets 00:31:43.889 slat (nsec): min=3466, max=46721, avg=5238.76, stdev=2079.07 00:31:43.889 clat (usec): min=399, max=8375, avg=2722.56, stdev=816.87 00:31:43.889 lat (usec): min=404, max=8380, avg=2727.80, stdev=818.15 00:31:43.889 clat percentiles (usec): 00:31:43.889 | 1.00th=[ 1450], 5.00th=[ 2073], 10.00th=[ 2311], 20.00th=[ 2409], 00:31:43.889 | 30.00th=[ 2442], 40.00th=[ 2474], 50.00th=[ 2507], 60.00th=[ 2540], 00:31:43.889 | 70.00th=[ 2606], 80.00th=[ 2737], 90.00th=[ 3392], 95.00th=[ 4621], 00:31:43.889 | 99.00th=[ 6194], 99.50th=[ 6456], 99.90th=[ 7635], 99.95th=[ 7963], 00:31:43.889 | 99.99th=[ 8225] 00:31:43.889 bw ( KiB/s): min=91600, max=96664, per=100.00%, avg=94453.67, stdev=2592.57, samples=3 00:31:43.889 iops : min=22900, max=24166, avg=23613.33, stdev=648.11, samples=3 00:31:43.889 lat (usec) : 500=0.01%, 750=0.03%, 1000=0.06% 00:31:43.889 lat (msec) : 2=4.10%, 4=88.75%, 10=7.04% 00:31:43.889 cpu : usr=99.35%, sys=0.00%, ctx=15, majf=0, minf=607 00:31:43.889 
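# Aside: the LD_PRELOAD dance before each fio run above appears to exist
# because the SPDK ioengine plugin is built with ASan, and the sanitizer
# runtime must come first in the process's library list or loading the plugin
# fails. The detection loop reduces to this sketch:
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
for sanitizer in libasan libclang_rt.asan; do
  asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
  [[ -n $asan_lib ]] && break    # /usr/lib64/libasan.so.8 on this box
done
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$@"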
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:31:43.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.889 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:43.889 issued rwts: total=47139,46811,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:43.889 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:43.889 00:31:43.889 Run status group 0 (all jobs): 00:31:43.889 READ: bw=92.0MiB/s (96.5MB/s), 92.0MiB/s-92.0MiB/s (96.5MB/s-96.5MB/s), io=184MiB (193MB), run=2001-2001msec 00:31:43.889 WRITE: bw=91.4MiB/s (95.8MB/s), 91.4MiB/s-91.4MiB/s (95.8MB/s-95.8MB/s), io=183MiB (192MB), run=2001-2001msec 00:31:43.889 ----------------------------------------------------- 00:31:43.889 Suppressions used: 00:31:43.889 count bytes template 00:31:43.889 1 32 /usr/src/fio/parse.c 00:31:43.889 1 8 libtcmalloc_minimal.so 00:31:43.889 ----------------------------------------------------- 00:31:43.889 00:31:43.889 15:57:04 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:31:43.889 15:57:04 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:31:43.889 15:57:04 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:31:43.889 15:57:04 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:31:43.889 15:57:04 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:31:43.889 15:57:04 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:31:43.889 15:57:05 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:31:43.889 15:57:05 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:31:43.889 15:57:05 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:31:43.889 15:57:05 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:31:43.889 15:57:05 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:43.889 15:57:05 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:31:43.889 15:57:05 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:31:43.889 15:57:05 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:31:43.889 15:57:05 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:31:43.889 15:57:05 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:31:43.889 15:57:05 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:31:43.889 15:57:05 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:31:43.889 15:57:05 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:31:43.889 15:57:05 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:31:43.889 15:57:05 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:31:43.889 15:57:05 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:31:43.889 15:57:05 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:31:43.889 15:57:05 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:31:43.889 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:43.889 fio-3.35 00:31:43.889 Starting 1 thread 00:31:50.439 00:31:50.439 test: (groupid=0, jobs=1): err= 0: pid=64206: Tue Nov 5 15:57:11 2024 00:31:50.439 read: IOPS=23.7k, BW=92.6MiB/s (97.1MB/s)(185MiB/2001msec) 00:31:50.439 slat (nsec): min=3388, max=82384, avg=5014.15, stdev=2217.45 00:31:50.439 clat (usec): min=493, max=8935, avg=2694.89, stdev=800.00 00:31:50.439 lat (usec): min=502, max=8980, avg=2699.91, stdev=801.35 00:31:50.439 clat percentiles (usec): 00:31:50.439 | 1.00th=[ 1696], 5.00th=[ 2147], 10.00th=[ 2343], 20.00th=[ 2409], 00:31:50.439 | 30.00th=[ 2442], 40.00th=[ 2474], 50.00th=[ 2507], 60.00th=[ 2540], 00:31:50.439 | 70.00th=[ 2573], 80.00th=[ 2638], 90.00th=[ 2999], 95.00th=[ 4752], 00:31:50.439 | 99.00th=[ 6128], 99.50th=[ 6456], 99.90th=[ 7373], 99.95th=[ 7767], 00:31:50.439 | 99.99th=[ 8848] 00:31:50.439 bw ( KiB/s): min=91504, max=95112, per=98.69%, avg=93568.00, stdev=1859.36, samples=3 00:31:50.439 iops : min=22876, max=23778, avg=23392.00, stdev=464.84, samples=3 00:31:50.439 write: IOPS=23.6k, BW=92.0MiB/s (96.5MB/s)(184MiB/2001msec); 0 zone resets 00:31:50.439 slat (nsec): min=3464, max=58334, avg=5248.32, stdev=2127.95 00:31:50.439 clat (usec): min=522, max=8865, avg=2700.34, stdev=800.99 00:31:50.439 lat (usec): min=531, max=8878, avg=2705.59, stdev=802.32 00:31:50.439 clat percentiles (usec): 00:31:50.439 | 1.00th=[ 1729], 5.00th=[ 2147], 10.00th=[ 2343], 20.00th=[ 2409], 00:31:50.439 | 30.00th=[ 2442], 40.00th=[ 2474], 50.00th=[ 2507], 60.00th=[ 2540], 00:31:50.439 | 70.00th=[ 2573], 80.00th=[ 2638], 90.00th=[ 2999], 95.00th=[ 4752], 00:31:50.439 | 99.00th=[ 6128], 99.50th=[ 6456], 99.90th=[ 7439], 99.95th=[ 7898], 00:31:50.439 | 99.99th=[ 8717] 00:31:50.439 bw ( KiB/s): min=90400, max=96608, per=99.39%, avg=93658.67, stdev=3115.54, samples=3 00:31:50.439 iops : min=22600, max=24152, avg=23414.67, stdev=778.88, samples=3 00:31:50.439 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.03% 00:31:50.439 lat (msec) : 2=3.01%, 4=89.92%, 10=7.03% 00:31:50.439 cpu : usr=99.25%, sys=0.10%, ctx=3, majf=0, minf=607 00:31:50.439 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:31:50.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.439 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:50.439 issued rwts: total=47431,47139,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:50.439 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:50.439 00:31:50.439 Run status group 0 (all jobs): 00:31:50.439 READ: bw=92.6MiB/s (97.1MB/s), 92.6MiB/s-92.6MiB/s (97.1MB/s-97.1MB/s), io=185MiB (194MB), run=2001-2001msec 00:31:50.439 WRITE: bw=92.0MiB/s (96.5MB/s), 92.0MiB/s-92.0MiB/s (96.5MB/s-96.5MB/s), io=184MiB (193MB), run=2001-2001msec 00:31:50.439 ----------------------------------------------------- 00:31:50.439 Suppressions used: 00:31:50.439 count bytes template 00:31:50.439 1 32 /usr/src/fio/parse.c 00:31:50.439 1 8 libtcmalloc_minimal.so 00:31:50.439 ----------------------------------------------------- 00:31:50.439 00:31:50.439 15:57:11 nvme.nvme_fio -- nvme/nvme.sh@44 -- # 
ran_fio=true
00:31:50.439 15:57:11 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}"
00:31:50.439 15:57:11 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0'
00:31:50.439 15:57:11 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+'
00:31:50.696 15:57:11 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0'
00:31:50.696 15:57:11 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA'
00:31:50.953 15:57:12 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096
00:31:50.953 15:57:12 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096
00:31:50.953 15:57:12 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096
00:31:50.953 15:57:12 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio
00:31:50.953 15:57:12 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:31:50.953 15:57:12 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers
00:31:50.953 15:57:12 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:31:50.953 15:57:12 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift
00:31:50.953 15:57:12 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib=
00:31:50.953 15:57:12 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}"
00:31:50.953 15:57:12 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan
00:31:50.953 15:57:12 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:31:50.953 15:57:12 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}'
00:31:50.953 15:57:12 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8
00:31:50.953 15:57:12 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:31:50.953 15:57:12 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break
00:31:50.953 15:57:12 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme'
00:31:50.953 15:57:12 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096
00:31:50.953 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:31:50.953 fio-3.35
00:31:50.953 Starting 1 thread
00:31:59.059
00:31:59.059 test: (groupid=0, jobs=1): err= 0: pid=64267: Tue Nov 5 15:57:19 2024
00:31:59.059 read: IOPS=24.1k, BW=94.1MiB/s (98.6MB/s)(188MiB/2001msec)
00:31:59.059 slat (usec): min=3, max=392, avg= 4.87, stdev= 2.69
00:31:59.059 clat (usec): min=214, max=10757, avg=2654.88, stdev=742.12
00:31:59.059 lat (usec): min=219, max=10810, avg=2659.75, stdev=743.31
00:31:59.059 clat percentiles (usec):
00:31:59.059 | 1.00th=[ 1467], 5.00th=[ 2073], 10.00th=[ 2278], 20.00th=[ 2376],
00:31:59.059 | 30.00th=[ 2409], 40.00th=[ 2442], 50.00th=[ 2474], 60.00th=[ 2507],
00:31:59.059 | 70.00th=[ 2573], 80.00th=[ 2704], 90.00th=[ 3097], 95.00th=[ 4359],
00:31:59.059 | 99.00th=[ 5866], 99.50th=[ 6456], 99.90th=[ 7635], 99.95th=[ 8291],
00:31:59.059 | 99.99th=[10552]
00:31:59.059 bw ( KiB/s): min=92800, max=96128, per=98.22%, avg=94613.33, stdev=1683.98, samples=3
00:31:59.059 iops : min=23200, max=24032, avg=23653.33, stdev=421.00, samples=3
00:31:59.059 write: IOPS=23.9k, BW=93.5MiB/s (98.0MB/s)(187MiB/2001msec); 0 zone resets
00:31:59.059 slat (nsec): min=3439, max=74726, avg=5092.95, stdev=1991.88
00:31:59.059 clat (usec): min=265, max=10609, avg=2655.71, stdev=725.66
00:31:59.059 lat (usec): min=269, max=10623, avg=2660.80, stdev=726.85
00:31:59.059 clat percentiles (usec):
00:31:59.059 | 1.00th=[ 1467], 5.00th=[ 2114], 10.00th=[ 2278], 20.00th=[ 2376],
00:31:59.059 | 30.00th=[ 2409], 40.00th=[ 2442], 50.00th=[ 2474], 60.00th=[ 2540],
00:31:59.059 | 70.00th=[ 2573], 80.00th=[ 2704], 90.00th=[ 3097], 95.00th=[ 4293],
00:31:59.059 | 99.00th=[ 5866], 99.50th=[ 6325], 99.90th=[ 7635], 99.95th=[ 8717],
00:31:59.059 | 99.99th=[10290]
00:31:59.059 bw ( KiB/s): min=92080, max=95864, per=98.83%, avg=94592.00, stdev=2175.51, samples=3
00:31:59.059 iops : min=23020, max=23966, avg=23648.00, stdev=543.88, samples=3
00:31:59.059 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.10%
00:31:59.059 lat (msec) : 2=4.05%, 4=89.52%, 10=6.28%, 20=0.02%
00:31:59.059 cpu : usr=98.95%, sys=0.20%, ctx=15, majf=0, minf=607
00:31:59.059 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:31:59.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:59.059 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:31:59.059 issued rwts: total=48189,47879,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:59.059 latency : target=0, window=0, percentile=100.00%, depth=128
00:31:59.059
00:31:59.059 Run status group 0 (all jobs):
00:31:59.059 READ: bw=94.1MiB/s (98.6MB/s), 94.1MiB/s-94.1MiB/s (98.6MB/s-98.6MB/s), io=188MiB (197MB), run=2001-2001msec
00:31:59.059 WRITE: bw=93.5MiB/s (98.0MB/s), 93.5MiB/s-93.5MiB/s (98.0MB/s-98.0MB/s), io=187MiB (196MB), run=2001-2001msec
00:31:59.059 -----------------------------------------------------
00:31:59.059 Suppressions used:
00:31:59.059 count bytes template
00:31:59.059 1 32 /usr/src/fio/parse.c
00:31:59.059 1 8 libtcmalloc_minimal.so
00:31:59.059 -----------------------------------------------------
00:31:59.059
00:31:59.059 15:57:19 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true
00:31:59.059 15:57:19 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}"
00:31:59.059 15:57:19 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+'
00:31:59.059 15:57:19 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0'
00:31:59.059 15:57:19 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0'
00:31:59.059 15:57:19 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA'
00:31:59.059 15:57:19 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096
00:31:59.059 15:57:19 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096
00:31:59.059 15:57:19 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096
00:31:59.059 15:57:19 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio
00:31:59.059 15:57:19 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:31:59.059 15:57:19 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers
00:31:59.059 15:57:19 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:31:59.059 15:57:19 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift
00:31:59.059 15:57:19 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib=
00:31:59.059 15:57:19 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}"
00:31:59.059 15:57:19 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:31:59.059 15:57:19 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan
00:31:59.059 15:57:19 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}'
00:31:59.059 15:57:19 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8
00:31:59.059 15:57:19 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:31:59.059 15:57:19 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break
00:31:59.059 15:57:19 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme'
00:31:59.059 15:57:19 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096
00:31:59.059 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:31:59.059 fio-3.35
00:31:59.059 Starting 1 thread
00:32:09.025
00:32:09.025 test: (groupid=0, jobs=1): err= 0: pid=64329: Tue Nov 5 15:57:28 2024
00:32:09.025 read: IOPS=23.4k, BW=91.3MiB/s (95.7MB/s)(183MiB/2001msec)
00:32:09.025 slat (nsec): min=3349, max=80615, avg=5111.76, stdev=2502.68
00:32:09.025 clat (usec): min=390, max=7725, avg=2737.31, stdev=883.29
00:32:09.025 lat (usec): min=398, max=7732, avg=2742.43, stdev=884.83
00:32:09.025 clat percentiles (usec):
00:32:09.025 | 1.00th=[ 1401], 5.00th=[ 2057], 10.00th=[ 2212], 20.00th=[ 2376],
00:32:09.025 | 30.00th=[ 2409], 40.00th=[ 2442], 50.00th=[ 2474], 60.00th=[ 2507],
00:32:09.025 | 70.00th=[ 2573], 80.00th=[ 2769], 90.00th=[ 3720], 95.00th=[ 4883],
00:32:09.025 | 99.00th=[ 6456], 99.50th=[ 6783], 99.90th=[ 7308], 99.95th=[ 7439],
00:32:09.025 | 99.99th=[ 7635]
00:32:09.025 bw ( KiB/s): min=89128, max=94968, per=99.41%, avg=92941.33, stdev=3304.62, samples=3
00:32:09.025 iops : min=22282, max=23742, avg=23235.33, stdev=826.16, samples=3
00:32:09.025 write: IOPS=23.2k, BW=90.7MiB/s (95.1MB/s)(182MiB/2001msec); 0 zone resets
00:32:09.025 slat (nsec): min=3413, max=93554, avg=5312.94, stdev=2428.67
00:32:09.025 clat (usec): min=472, max=7670, avg=2732.88, stdev=875.91
00:32:09.025 lat (usec): min=480, max=7712, avg=2738.20, stdev=877.40
00:32:09.025 clat percentiles (usec):
00:32:09.025 | 1.00th=[ 1434], 5.00th=[ 2057], 10.00th=[ 2245], 20.00th=[ 2376],
00:32:09.025 | 30.00th=[ 2409], 40.00th=[ 2442], 50.00th=[ 2474], 60.00th=[ 2507],
00:32:09.025 | 70.00th=[ 2573], 80.00th=[ 2769], 90.00th=[ 3654], 95.00th=[ 4883],
00:32:09.025 | 99.00th=[ 6456], 99.50th=[ 6718], 99.90th=[ 7242], 99.95th=[ 7373],
00:32:09.025 | 99.99th=[ 7570]
00:32:09.025 bw ( KiB/s): min=88664, max=96424, per=100.00%, avg=93061.33, stdev=3982.12, samples=3
00:32:09.025 iops : min=22166, max=24106, avg=23265.33, stdev=995.53, samples=3
00:32:09.025 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.11%
00:32:09.025 lat (msec) : 2=4.27%, 4=87.49%, 10=8.12%
00:32:09.025 cpu : usr=99.25%, sys=0.00%, ctx=3, majf=0, minf=605
00:32:09.025 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:32:09.025 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:32:09.025 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:32:09.025 issued rwts: total=46772,46475,0,0 short=0,0,0,0 dropped=0,0,0,0
00:32:09.025 latency : target=0, window=0, percentile=100.00%, depth=128
00:32:09.025
00:32:09.025 Run status group 0 (all jobs):
00:32:09.025 READ: bw=91.3MiB/s (95.7MB/s), 91.3MiB/s-91.3MiB/s (95.7MB/s-95.7MB/s), io=183MiB (192MB), run=2001-2001msec
00:32:09.025 WRITE: bw=90.7MiB/s (95.1MB/s), 90.7MiB/s-90.7MiB/s (95.1MB/s-95.1MB/s), io=182MiB (190MB), run=2001-2001msec
00:32:09.025 -----------------------------------------------------
00:32:09.025 Suppressions used:
00:32:09.025 count bytes template
00:32:09.025 1 32 /usr/src/fio/parse.c
00:32:09.025 1 8 libtcmalloc_minimal.so
00:32:09.025 -----------------------------------------------------
00:32:09.025
00:32:09.025 ************************************
00:32:09.025 END TEST nvme_fio
00:32:09.025 ************************************
00:32:09.025 15:57:29 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true
00:32:09.025 15:57:29 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true
00:32:09.025
00:32:09.025 real 0m31.268s
00:32:09.025 user 0m18.590s
00:32:09.025 sys 0m23.356s
00:32:09.025 15:57:29 nvme.nvme_fio -- common/autotest_common.sh@1128 -- # xtrace_disable
00:32:09.025 15:57:29 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:32:09.025 ************************************
00:32:09.025 END TEST nvme
00:32:09.025 ************************************
00:32:09.025
00:32:09.025 real 1m40.597s
00:32:09.025 user 3m40.266s
00:32:09.025 sys 0m33.784s
00:32:09.025 15:57:29 nvme -- common/autotest_common.sh@1128 -- # xtrace_disable
00:32:09.025 15:57:29 nvme -- common/autotest_common.sh@10 -- # set +x
00:32:09.025 15:57:29 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]]
00:32:09.025 15:57:29 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh
00:32:09.025 15:57:29 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:32:09.025 15:57:29 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:32:09.025 15:57:29 -- common/autotest_common.sh@10 -- # set +x
00:32:09.025 ************************************
00:32:09.025 START TEST nvme_scc
00:32:09.025 ************************************
00:32:09.025 15:57:29 nvme_scc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh
00:32:09.025 * Looking for test storage...
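Both fio passes above go through autotest_common.sh's fio_plugin wrapper, and its xtrace lines show how the run is armed for ASAN: the wrapper ldd's the SPDK ioengine, pulls the resolved libasan path out of the third column, and preloads that library ahead of the plugin itself so the sanitizer runtime initializes first. A minimal sketch of that step, reconstructed from the trace (paths and names as they appear there; not the verbatim helper):

    #!/usr/bin/env bash
    # Preload ASAN before the SPDK fio ioengine, as the fio_plugin trace shows.
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    fio_dir=/usr/src/fio
    sanitizers=('libasan' 'libclang_rt.asan')
    asan_lib=
    for sanitizer in "${sanitizers[@]}"; do
        # ldd prints "libasan.so.8 => /usr/lib64/libasan.so.8 (0x...)";
        # field 3 is the resolved library path.
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n $asan_lib ]] && break
    done
    # The sanitizer runtime must come first in LD_PRELOAD, ahead of the
    # instrumented plugin DSO.
    LD_PRELOAD="$asan_lib $plugin" "$fio_dir/fio" "$@"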
00:32:09.025 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:32:09.025 15:57:29 nvme_scc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:09.025 15:57:29 nvme_scc -- common/autotest_common.sh@1691 -- # lcov --version 00:32:09.025 15:57:29 nvme_scc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:09.025 15:57:29 nvme_scc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:09.025 15:57:29 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:09.025 15:57:29 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:09.025 15:57:29 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:09.025 15:57:29 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:32:09.025 15:57:29 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:32:09.025 15:57:29 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:32:09.025 15:57:29 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:32:09.025 15:57:29 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:32:09.025 15:57:29 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:32:09.025 15:57:29 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:32:09.025 15:57:29 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:09.025 15:57:29 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:32:09.025 15:57:29 nvme_scc -- scripts/common.sh@345 -- # : 1 00:32:09.025 15:57:29 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:09.025 15:57:29 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:09.025 15:57:29 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:32:09.025 15:57:29 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:32:09.025 15:57:29 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:09.025 15:57:29 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:32:09.025 15:57:29 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:32:09.025 15:57:29 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:32:09.025 15:57:29 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:32:09.025 15:57:29 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:09.025 15:57:29 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:32:09.025 15:57:29 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:32:09.025 15:57:29 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:09.025 15:57:29 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:09.025 15:57:29 nvme_scc -- scripts/common.sh@368 -- # return 0 00:32:09.025 15:57:29 nvme_scc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:09.025 15:57:29 nvme_scc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:09.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:09.025 --rc genhtml_branch_coverage=1 00:32:09.025 --rc genhtml_function_coverage=1 00:32:09.025 --rc genhtml_legend=1 00:32:09.025 --rc geninfo_all_blocks=1 00:32:09.025 --rc geninfo_unexecuted_blocks=1 00:32:09.025 00:32:09.025 ' 00:32:09.025 15:57:29 nvme_scc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:09.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:09.025 --rc genhtml_branch_coverage=1 00:32:09.025 --rc genhtml_function_coverage=1 00:32:09.025 --rc genhtml_legend=1 00:32:09.025 --rc geninfo_all_blocks=1 00:32:09.025 --rc geninfo_unexecuted_blocks=1 00:32:09.025 00:32:09.025 ' 00:32:09.025 15:57:29 nvme_scc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:32:09.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:09.025 --rc genhtml_branch_coverage=1 00:32:09.025 --rc genhtml_function_coverage=1 00:32:09.025 --rc genhtml_legend=1 00:32:09.025 --rc geninfo_all_blocks=1 00:32:09.025 --rc geninfo_unexecuted_blocks=1 00:32:09.025 00:32:09.025 ' 00:32:09.025 15:57:29 nvme_scc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:09.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:09.025 --rc genhtml_branch_coverage=1 00:32:09.025 --rc genhtml_function_coverage=1 00:32:09.025 --rc genhtml_legend=1 00:32:09.025 --rc geninfo_all_blocks=1 00:32:09.025 --rc geninfo_unexecuted_blocks=1 00:32:09.025 00:32:09.025 ' 00:32:09.025 15:57:29 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:32:09.025 15:57:29 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:32:09.025 15:57:29 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:32:09.025 15:57:29 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:32:09.025 15:57:29 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:09.025 15:57:29 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:32:09.025 15:57:29 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:09.025 15:57:29 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:09.026 15:57:29 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:09.026 15:57:29 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.026 15:57:29 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.026 15:57:29 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.026 15:57:29 nvme_scc -- paths/export.sh@5 -- # export PATH 00:32:09.026 15:57:29 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
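The lcov probe traced above (scripts/common.sh) decides which coverage flags apply by splitting both version strings on '.', '-' and ':' and comparing the fields numerically; here 'lt 1.15 2' succeeds, so the pre-2.0 lcov option set is exported. A condensed sketch of that comparison, simplified from the traced cmp_versions (the real helper also handles '>', '>=', '<=' and non-numeric fields via decimal()):

    # Returns success when version $1 is strictly older than $2,
    # mirroring the traced "lt 1.15 2" path in scripts/common.sh.
    lt() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            local d1=${ver1[v]:-0} d2=${ver2[v]:-0}
            (( d1 > d2 )) && return 1   # newer
            (( d1 < d2 )) && return 0   # older
        done
        return 1                        # equal
    }
    lt 1.15 2 && echo "lcov is older than 2.0"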
00:32:09.026 15:57:29 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:32:09.026 15:57:29 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:32:09.026 15:57:29 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:32:09.026 15:57:29 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:32:09.026 15:57:29 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:32:09.026 15:57:29 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:32:09.026 15:57:29 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:32:09.026 15:57:29 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:32:09.026 15:57:29 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:32:09.026 15:57:29 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:09.026 15:57:29 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:32:09.026 15:57:29 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:32:09.026 15:57:29 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:32:09.026 15:57:29 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:32:09.026 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:09.026 Waiting for block devices as requested 00:32:09.026 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:32:09.026 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:32:09.026 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:32:09.026 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:32:14.289 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:32:14.289 15:57:35 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:32:14.289 15:57:35 nvme_scc -- scripts/common.sh@18 -- # local i 00:32:14.289 15:57:35 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:32:14.289 15:57:35 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:32:14.289 15:57:35 nvme_scc -- scripts/common.sh@27 -- # return 0 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@18 -- # shift 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
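The wall of trace output that follows is scan_nvme_ctrls from test/common/nvme/functions.sh: for each /sys/class/nvme/nvme* controller it runs nvme-cli's id-ctrl (and, per namespace, id-ns), splits every report line on ':' into a register/value pair, and evals the pair into a global associative array such as nvme0. A compact sketch of that parse loop, assuming the same 'key : value' report layout nvme-cli prints:

    shopt -s extglob
    declare -A nvme0=()
    while IFS=: read -r reg val; do
        reg=${reg%%+( )}    # id-ctrl pads the key with spaces before the ':'
        val=${val##+( )}    # ...and prints the value after it
        [[ -n $reg && -n $val ]] && nvme0[$reg]=$val
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
    # Later checks can then test fields directly:
    echo "vid=${nvme0[vid]} mdts=${nvme0[mdts]} oncs=${nvme0[oncs]}"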
00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.289 15:57:35 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
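An aside on reading these captures: oacs=0x12a above is the Optional Admin Command Support bitmask from Identify Controller. Decoding it with the bit positions from the NVMe base spec (the bit names below come from the spec, not from this log) shows what the QEMU controller advertises:

    oacs=0x12a   # value captured in the trace above
    # Bit positions per the NVMe base spec OACS field (illustrative).
    names=([0]="Security Send/Recv" [1]="Format NVM" [2]="FW Download/Commit"
           [3]="Namespace Mgmt" [4]="Device Self-test" [5]="Directives"
           [6]="NVMe-MI" [7]="Virtualization Mgmt" [8]="Doorbell Buffer Config")
    for bit in "${!names[@]}"; do
        (( oacs >> bit & 1 )) && echo "oacs bit $bit: ${names[bit]}"
    done
    # 0x12a sets bits 1, 3, 5 and 8: Format NVM, Namespace Mgmt,
    # Directives and Doorbell Buffer Config.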
00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.289 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:32:14.290 15:57:35 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.290 15:57:35 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.290 15:57:35 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:32:14.290 15:57:35 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@18 -- # shift 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[dlfeat]="1"' 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.290 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
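
The nvme0n1[flbas]=0x4 captured above and the lbaf entries read out below are related: the low nibble of flbas selects the active LBA format, which is why exactly one lbaf line in this trace is tagged "(in use)". A small sketch of that decode plus the usual size arithmetic, where lbads is log2 of the block size (nsze/lbads values borrowed from the nvme1n1 dump later in this trace):

    flbas=0x4
    echo "active format: lbaf$(( flbas & 0xf ))"   # -> lbaf4, tagged (in use) below
    # capacity in bytes = nsze * 2^lbads; nvme1n1 below reports nsze=0x17a17a with
    # lbads:12 in its in-use format, i.e. 4096-byte blocks:
    echo $(( 0x17a17a * (1 << 12) ))               # -> 6343335936 (~5.9 GiB)
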
00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:32:14.291 15:57:35 
nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]]
00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"'
00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]]
00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "'
00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]]
00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "'
00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]]
00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "'
00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0
00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]]
00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0
00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0
00:32:14.291 15:57:35 nvme_scc -- scripts/common.sh@18 -- # local i
00:32:14.291 15:57:35 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]]
00:32:14.291 15:57:35 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:32:14.291 15:57:35 nvme_scc -- scripts/common.sh@27 -- # return 0
00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1
00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1
00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val
00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@18 -- # shift
00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()'
00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:32:14.291 15:57:35 nvme_scc --
nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:32:14.291 15:57:35 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[mdts]=7 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:32:14.291 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:32:14.292 
15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 
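
The nvme1[oacs]=0x12a captured just above is a bitmask of optional admin command support; decoding it shows why this QEMU controller can format and manage namespaces. A sketch using the NVMe base spec bit positions (bit 1 Format NVM, bit 3 namespace management, bit 5 directives, bit 8 doorbell buffer config; 0x12a sets bits 1, 3, 5 and 8):

    oacs=0x12a                             # from the nvme1 id-ctrl dump above
    (( oacs & (1 << 1) )) && echo "Format NVM supported"
    (( oacs & (1 << 3) )) && echo "namespace management supported"
    (( oacs & (1 << 8) )) && echo "doorbell buffer config supported"  # QEMU-typical
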
00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:32:14.292 15:57:35 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.292 15:57:35 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:32:14.292 15:57:35 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:32:14.292 15:57:35 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.292 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
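
Once the id-ctrl parse finishes, the trace below wires the controller into the global lookup tables through a bash nameref (functions.sh@53 and @58-@63). A self-contained sketch of that bookkeeping, with names and values copied from the trace:

    declare -A ctrls nvmes bdfs nvme1_ns
    declare -a ordered_ctrls
    declare -n _ctrl_ns=nvme1_ns   # nameref: writes to _ctrl_ns land in nvme1_ns
    _ctrl_ns[1]=nvme1n1            # @58: namespace index comes from ${ns##*n}
    ctrls[nvme1]=nvme1             # @60
    nvmes[nvme1]=nvme1_ns          # @61: stores the *name* of the per-ctrl ns array
    bdfs[nvme1]=0000:00:10.0       # @62: PCI address probed for this controller
    ordered_ctrls[1]=nvme1         # @63: ${ctrl_dev/nvme/} yields the sort index
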
00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@18 -- # shift 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1n1[ncap]=0x17a17a 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.293 15:57:35 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme1n1[nvmcap]="0"' 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:32:14.293 
15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]]
00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "'
00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]]
00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "'
00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 '
00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]]
00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "'
00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]]
00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "'
00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]]
00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"'
00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)'
00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0
00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0
00:32:14.293 15:57:35 nvme_scc -- scripts/common.sh@18 -- # local i
00:32:14.293 15:57:35 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]]
00:32:14.293 15:57:35 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:32:14.293 15:57:35 nvme_scc -- scripts/common.sh@27 -- # return 0
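
What the trace around this point is exercising: nvme/functions.sh walks /sys/class/nvme/nvme*, checks each controller's PCI address through pci_can_use() (the scripts/common.sh entries above), and then calls nvme_get to fill one global associative array per device from nvme-cli output. Reconstructed from the commands visible in this log, the parsing loop looks roughly like the sketch below; this is a simplification, and the real helper may differ in trimming and quoting details:

    # Sketch of the nvme_get pattern seen in this trace (assumed, simplified).
    nvme_get() {
        local ref=$1 reg val    # e.g. ref=nvme2, then ref=nvme2n1, ...
        shift                   # remaining args: id-ctrl /dev/nvme2, id-ns /dev/nvme2n1, ...
        local -gA "$ref=()"     # one global assoc array per controller/namespace

        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue                 # header/blank lines carry no value
            reg=${reg//[[:space:]]/} val=${val# }     # strip the column padding
            eval "${ref}[${reg}]=\"$val\""            # e.g. nvme2[sn]='12342 '
        done < <(/usr/local/src/nvme-cli/nvme "$@")   # nvme-cli path as used in this run
    }

Each IFS=: / read -r reg val pair in the log is one iteration of that loop, the [[ -n ... ]] entries are the value checks, and the paired eval/assignment entries show the array being filled. The discovery loop now moves on to the controller at 0000:00:12.0:
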
00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val
00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@18 -- # shift
00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()'
00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]]
00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"'
00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36
00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]]
00:32:14.293 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"'
00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4
00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]]
00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "'
00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 '
00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]]
00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "'
00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl '
00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]]
00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "'
00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 '
00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]]
00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"'
00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6
00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]]
00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"'
00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400
00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:32:14.294 15:57:35
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:32:14.294 15:57:35 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:32:14.294 15:57:35 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:32:14.294 15:57:35 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 
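
nn (Number of Namespaces) is the last of the sizing fields; the very next field read below, oncs: 0x15d, is the Optional NVM Command Support bitmap that an SCC (Simple Copy) suite like this nvme_scc run ultimately keys off: per the NVMe base spec, bit 8 of ONCS advertises the Copy command, and 0x15d has bit 8 set. A hedged sketch of such a gate over the arrays nvme_get builds (helper name invented for illustration; the real check in the test suite may be structured differently):

    # Hypothetical helper: does controller $1 advertise Copy (ONCS bit 8)?
    ctrl_supports_copy() {
        local -n _ctrl=$1              # nameref to the array nvme_get filled, e.g. nvme2
        (( (_ctrl[oncs] >> 8) & 1 ))   # 0x15d >> 8 & 1 == 1 -> Copy supported
    }

    # ctrl_supports_copy nvme2 && echo "nvme2 can service Simple Copy"
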
00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0x3 ]] 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.294 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:32:14.295 
15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@18 -- # shift 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
0x100000 ]] 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 
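
A few entries back (the functions.sh@53-@57 commands) the scan switched from the nvme2 controller to its namespaces: a nameref aliases the controller's namespace table, a sysfs glob walks the nvme2n* nodes, and nvme_get is re-run with id-ns for each one. A minimal sketch of that wiring, assuming the structure the trace implies (in the real script this body lives inside the controller-scan function):

    # Namespace pass as visible at functions.sh@53-58 in this trace (sketch).
    scan_ctrl_namespaces() {
        local ctrl=$1 ctrl_dev=${1##*/} ns ns_dev   # ctrl=/sys/class/nvme/nvme2
        local -n _ctrl_ns=${ctrl_dev}_ns            # e.g. nvme2_ns, declared -A by the caller
        for ns in "$ctrl/${ctrl##*/}n"*; do         # .../nvme2/nvme2n1, nvme2n2, ...
            [[ -e $ns ]] || continue                # glob may match nothing
            ns_dev=${ns##*/}                        # nvme2n1
            nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
            _ctrl_ns[${ns##*n}]=$ns_dev             # numeric suffix keys the table
        done
    }

The [[ -n '' ]] entry right after the id-ns invocation above is the loop skipping nvme-cli's header line; the parse resumes below with the nmic assignment and runs through the same fill pattern for nvme2n1.
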
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.295 15:57:35 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:32:14.295 15:57:35 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
ms:8 lbads:9 rp:0 ]]
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "'
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]]
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "'
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]]
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "'
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]]
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"'
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]]
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "'
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]]
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "'
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]]
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "'
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
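
That completes nvme2n1: flbas=0x4 was read earlier, and lbaf4 above is the slot tagged (in use), i.e. ms:0 lbads:12, a 4096-byte data block with no metadata. Turning those array entries into a block size needs only the spec rule that the in-use format index sits in FLBAS bits 0-3 and that the data block is 2^LBADS bytes; a hedged sketch (helper name invented for illustration):

    # Hypothetical helper: data block size, in bytes, of the LBA format in use.
    ns_block_size() {
        local -n _ns=$1                        # e.g. nvme2n1, filled by nvme_get above
        local idx=$(( _ns[flbas] & 0xf ))      # FLBAS bits 0-3: in-use format index
        local lbaf=${_ns[lbaf$idx]}            # "ms:0 lbads:12 rp:0 (in use)"
        local lbads=${lbaf##*lbads:}           # "12 rp:0 (in use)"
        lbads=${lbads%% *}                     # "12"
        echo $(( 1 << lbads ))                 # 2^12 = 4096 for this namespace
    }

    # ns_block_size nvme2n1   -> 4096

The scan then advances to the second namespace on this controller:
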
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@18 -- # shift
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()'
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]]
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"'
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]]
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"'
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]]
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"'
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]]
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"'
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]]
00:32:14.295 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"'
00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7
00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]]
00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"'
00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4
00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]]
00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"'
00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3
00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]]
00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"'
00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f
00:32:14.296 15:57:35 nvme_scc
-- nvme/functions.sh@21 -- # IFS=: 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 
00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 
00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.296 
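The repeated IFS=: / read -r reg val / [[ -n ... ]] / eval steps traced above are one loop, not hundreds of separate statements: nvme_get pipes the `nvme id-ns` output through a while-read loop, splits each "field : value" line on the colon, and stores every non-empty value in a global associative array named after the device (here nvme2n2), using eval because the array name is dynamic. A minimal standalone sketch of that pattern follows; the fixed array name ns_info and the bare `nvme` command are assumptions for illustration (the trace invokes /usr/local/src/nvme-cli/nvme and evals into nvme2n2), not the verbatim helper from nvme/functions.sh:

    #!/usr/bin/env bash
    # Sketch of the harvesting loop traced above.
    # Assumption: nvme-cli prints one "field : value" pair per line.
    declare -A ns_info=()                  # hypothetical; the script uses nvme2n2
    while IFS=' :' read -r reg val; do     # reg = field name, val = rest of line
        [[ -n $val ]] || continue          # the traced [[ -n ... ]] guard
        ns_info[$reg]=$val                 # real code: eval 'nvme2n2[nsze]="..."'
    done < <(nvme id-ns /dev/nvme2n2)
    printf 'nsze=%s flbas=%s\n' "${ns_info[nsze]}" "${ns_info[flbas]}"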
15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@18 -- # shift 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.296 
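Between the two id-ns dumps, functions.sh@54-58 advance the namespace loop: each /sys/class/nvme/nvme2/nvme2nN sysfs entry becomes ns_dev, and _ctrl_ns is indexed by the numeric NSID peeled off with ${ns##*n}. A sketch of just that bookkeeping, written as a standalone script for illustration (paths and names mirror the trace):

    #!/usr/bin/env bash
    # Sketch of the sysfs namespace walk traced at functions.sh@54-58.
    declare -A _ctrl_ns=()
    ctrl=/sys/class/nvme/nvme2
    for ns in "$ctrl/${ctrl##*/}n"*; do    # expands to .../nvme2/nvme2n1, n2, n3
        [[ -e $ns ]] || continue           # the traced existence check (@55)
        ns_dev=${ns##*/}                   # basename, e.g. nvme2n2 (@56)
        _ctrl_ns[${ns##*n}]=$ns_dev        # strip through the last 'n' -> NSID key
    done
    declare -p _ctrl_ns                    # e.g. ([1]=nvme2n1 [2]=nvme2n2 [3]=nvme2n3)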
15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:32:14.296 15:57:35 
nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.296 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:32:14.297 
15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:32:14.297 15:57:35 nvme_scc -- scripts/common.sh@18 -- # local i 00:32:14.297 15:57:35 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:32:14.297 15:57:35 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:32:14.297 15:57:35 nvme_scc -- scripts/common.sh@27 -- # return 0 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@18 -- # shift 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:32:14.297 15:57:35 nvme_scc -- 
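With nvme2's last namespace parsed, functions.sh@60-63 register the controller in three global maps plus an ordered list, and the outer loop (@47-52) moves on to nvme3 at 0000:00:13.0 once pci_can_use returns 0. A sketch of that bookkeeping under the values shown in the trace; pci_can_use here is a stand-in stub, the real helper in scripts/common.sh consults the PCI_ALLOWED/PCI_BLOCKED lists:

    #!/usr/bin/env bash
    # Sketch of the registration traced at functions.sh@60-63 plus the PCI gate.
    declare -A ctrls=() nvmes=() bdfs=()
    declare -a ordered_ctrls=()
    pci_can_use() { [[ -z ${PCI_BLOCKED:-} ]]; }  # stub, not the real check
    ctrl_dev=nvme2
    ctrls[$ctrl_dev]=nvme2               # controller -> id-ctrl array name
    nvmes[$ctrl_dev]=nvme2_ns            # controller -> namespace-map array name
    bdfs[$ctrl_dev]=0000:00:12.0         # controller -> PCI address
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev   # slot 2 -> nvme2
    pci_can_use 0000:00:13.0 && echo "nvme3 is usable"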
nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme3[npss]="0"' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.297 
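Two of the id-ctrl values just captured are easy to sanity-check by hand: per the NVMe spec, WCTEMP and CCTEMP are reported in Kelvin, so the traced wctemp=343 and cctemp=373 are the usual QEMU defaults of 70 C (warning) and 100 C (critical). As a quick shell check:

    # Kelvin -> Celsius for the traced thresholds (Identify Controller WCTEMP/CCTEMP).
    wctemp=343 cctemp=373
    echo "warning at $((wctemp - 273)) C, critical at $((cctemp - 273)) C"  # 70 C, 100 C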
15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.297 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme3[hmminds]="0"' 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.298 15:57:35 nvme_scc -- 
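The queue-entry and transfer-size fields captured above decode as follows: per the NVMe spec, SQES and CQES each pack two log2 sizes (bits 3:0 required, bits 7:4 maximum), and MDTS (traced earlier as 7 for this controller) is a power of two in units of the controller's minimum page size, assumed to be 4 KiB here. A quick check in shell arithmetic:

    # Decode sqes=0x66, cqes=0x44, mdts=7 from the trace above.
    sqes=0x66 cqes=0x44 mdts=7
    printf 'SQE: %d..%d bytes\n' $((1 << (sqes & 0xf))) $((1 << (sqes >> 4)))  # 64..64
    printf 'CQE: %d..%d bytes\n' $((1 << (cqes & 0xf))) $((1 << (cqes >> 4)))  # 16..16
    printf 'max transfer: %d KiB\n' $(( (1 << mdts) * 4096 / 1024 ))           # 512 KiB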
nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
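The register walk above (and continuing below) is functions.sh's nvme_get at work: it runs `nvme id-ctrl`, splits every output line on `:` into a register name and a value, and evals the pair into a bash associative array named after the controller (nvme3 here), so later checks can read fields such as nvme3[oncs] directly. A minimal sketch of that pattern, assuming id-ctrl output of the form "reg : val" (the loop below is illustrative, not SPDK's exact helper):

    # Fold each "reg : val" line of id-ctrl output into an associative array.
    declare -A nvme3=()
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue                  # skip lines with no value
        reg=${reg//[[:space:]]/}                   # "sqes   " -> "sqes"
        val=${val#"${val%%[![:space:]]*}"}         # trim leading blanks
        eval "nvme3[$reg]=\"\$val\""               # e.g. nvme3[sqes]=0x66
    done < <(nvme id-ctrl /dev/nvme3)

Because `read -r reg val` only splits at the first colon, multi-colon values such as the power-state strings (mp:25.00W ...) land intact in val, which is why they survive into nvme3[ps0] below.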
00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:32:14.298 15:57:35 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:32:14.298 15:57:35 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:32:14.298 
15:57:35 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:32:14.298 15:57:35 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:32:14.298 15:57:35 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:32:14.298 15:57:35 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:32:14.298 15:57:35 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:32:14.623 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:14.881 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:32:14.881 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:32:14.881 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:32:14.881 0000:00:12.0 (1b36 
0010): nvme -> uio_pci_generic
00:32:15.138 15:57:36 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:32:15.138 15:57:36 nvme_scc -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:32:15.138 15:57:36 nvme_scc -- common/autotest_common.sh@1109 -- # xtrace_disable
00:32:15.138 15:57:36 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:32:15.138 ************************************
00:32:15.138 START TEST nvme_simple_copy
00:32:15.138 ************************************
00:32:15.138 15:57:36 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:32:15.138 Initializing NVMe Controllers
00:32:15.138 Attaching to 0000:00:10.0
00:32:15.138 Controller supports SCC. Attached to 0000:00:10.0
00:32:15.138 Namespace ID: 1 size: 6GB
00:32:15.138 Initialization complete.
00:32:15.138
00:32:15.138 Controller QEMU NVMe Ctrl (12340 )
00:32:15.138 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:32:15.138 Namespace Block Size:4096
00:32:15.138 Writing LBAs 0 to 63 with Random Data
00:32:15.138 Copied LBAs from 0 - 63 to the Destination LBA 256
00:32:15.138 LBAs matching Written Data: 64
00:32:15.395
00:32:15.395 real 0m0.247s
00:32:15.395 user 0m0.091s
00:32:15.395 sys 0m0.055s
00:32:15.395 15:57:36 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1128 -- # xtrace_disable
00:32:15.395 15:57:36 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:32:15.395 ************************************
00:32:15.395 END TEST nvme_simple_copy
00:32:15.395 ************************************
00:32:15.395
00:32:15.395 real 0m7.314s
00:32:15.395 user 0m0.911s
00:32:15.395 sys 0m1.201s
00:32:15.395 15:57:36 nvme_scc -- common/autotest_common.sh@1128 -- # xtrace_disable
00:32:15.395 15:57:36 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:32:15.395 ************************************
00:32:15.395 END TEST nvme_scc
00:32:15.395 ************************************
00:32:15.395 15:57:36 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
00:32:15.395 15:57:36 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]]
00:32:15.395 15:57:36 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]]
00:32:15.395 15:57:36 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]]
00:32:15.395 15:57:36 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:32:15.395 15:57:36 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:32:15.395 15:57:36 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:32:15.395 15:57:36 -- common/autotest_common.sh@10 -- # set +x
00:32:15.395 ************************************
00:32:15.395 START TEST nvme_fdp
00:32:15.395 ************************************
00:32:15.395 15:57:36 nvme_fdp -- common/autotest_common.sh@1127 -- # test/nvme/nvme_fdp.sh
00:32:15.395 * Looking for test storage...
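Before nvme_fdp re-runs the same controller discovery below, note how nvme_scc picked its target in the trace above: get_ctrls_with_feature tests the Optional NVM Command Support field captured for each controller, where bit 8 of ONCS advertises the Copy command. Every controller here reports oncs=0x15d, so all four pass the ctrl_has_scc gate and nvme1 at 0000:00:10.0 is simply the first one echoed. The same bit test, as a self-contained sketch:

    # ONCS bit 8 = Copy (SCC) support; this is the gate traced above.
    oncs=0x15d
    if (( oncs & 1 << 8 )); then
        echo "Simple Copy supported"    # 0x15d & 0x100 == 0x100, so this prints
    fi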
00:32:15.395 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:32:15.395 15:57:36 nvme_fdp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:15.395 15:57:36 nvme_fdp -- common/autotest_common.sh@1691 -- # lcov --version 00:32:15.395 15:57:36 nvme_fdp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:15.395 15:57:36 nvme_fdp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:15.395 15:57:36 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:15.395 15:57:36 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:15.395 15:57:36 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:15.395 15:57:36 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:32:15.395 15:57:36 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:32:15.395 15:57:36 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:32:15.395 15:57:36 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:32:15.395 15:57:36 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:32:15.395 15:57:36 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:32:15.395 15:57:36 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:32:15.395 15:57:36 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:15.395 15:57:36 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:32:15.395 15:57:36 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:32:15.395 15:57:36 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:15.396 15:57:36 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:15.396 15:57:36 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:32:15.396 15:57:36 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:32:15.396 15:57:36 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:15.396 15:57:36 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:32:15.396 15:57:36 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:32:15.396 15:57:36 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:32:15.396 15:57:36 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:32:15.396 15:57:36 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:15.396 15:57:36 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:32:15.396 15:57:36 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:32:15.396 15:57:36 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:15.396 15:57:36 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:15.396 15:57:36 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:32:15.396 15:57:36 nvme_fdp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:15.396 15:57:36 nvme_fdp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:15.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:15.396 --rc genhtml_branch_coverage=1 00:32:15.396 --rc genhtml_function_coverage=1 00:32:15.396 --rc genhtml_legend=1 00:32:15.396 --rc geninfo_all_blocks=1 00:32:15.396 --rc geninfo_unexecuted_blocks=1 00:32:15.396 00:32:15.396 ' 00:32:15.396 15:57:36 nvme_fdp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:15.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:15.396 --rc genhtml_branch_coverage=1 00:32:15.396 --rc genhtml_function_coverage=1 00:32:15.396 --rc genhtml_legend=1 00:32:15.396 --rc geninfo_all_blocks=1 00:32:15.396 --rc geninfo_unexecuted_blocks=1 00:32:15.396 00:32:15.396 ' 00:32:15.396 15:57:36 nvme_fdp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:32:15.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:15.396 --rc genhtml_branch_coverage=1 00:32:15.396 --rc genhtml_function_coverage=1 00:32:15.396 --rc genhtml_legend=1 00:32:15.396 --rc geninfo_all_blocks=1 00:32:15.396 --rc geninfo_unexecuted_blocks=1 00:32:15.396 00:32:15.396 ' 00:32:15.396 15:57:36 nvme_fdp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:15.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:15.396 --rc genhtml_branch_coverage=1 00:32:15.396 --rc genhtml_function_coverage=1 00:32:15.396 --rc genhtml_legend=1 00:32:15.396 --rc geninfo_all_blocks=1 00:32:15.396 --rc geninfo_unexecuted_blocks=1 00:32:15.396 00:32:15.396 ' 00:32:15.396 15:57:36 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:32:15.396 15:57:36 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:32:15.396 15:57:36 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:32:15.396 15:57:36 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:32:15.396 15:57:36 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:15.396 15:57:36 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:32:15.396 15:57:36 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:15.396 15:57:36 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:15.396 15:57:36 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:15.396 15:57:36 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:15.396 15:57:36 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:15.397 15:57:36 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:15.397 15:57:36 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:32:15.397 15:57:36 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
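The lcov probe just traced runs scripts/common.sh's version comparison: `lt 1.15 2` splits both strings on `.`, `-`, and `:` (the IFS=.-: reads above) into the arrays ver1 and ver2, then compares them field by field as decimals, treating missing fields as zero. Since 1 < 2 in the first field, the branch-coverage LCOV options get exported. A sketch of that comparison under those assumptions (cmp_lt is an illustrative name; the script's own helpers are lt and cmp_versions):

    # Return 0 (true) when $1 is an older version than $2.
    cmp_lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # older
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # newer
        done
        return 1    # equal versions are not "less than"
    }
    cmp_lt 1.15 2 && echo "lcov 1.15 predates 2"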
00:32:15.397 15:57:36 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:32:15.397 15:57:36 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:32:15.397 15:57:36 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:32:15.397 15:57:36 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:32:15.397 15:57:36 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:32:15.397 15:57:36 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:32:15.397 15:57:36 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:32:15.397 15:57:36 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:32:15.397 15:57:36 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:32:15.397 15:57:36 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:15.397 15:57:36 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:32:15.654 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:15.911 Waiting for block devices as requested 00:32:15.912 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:32:15.912 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:32:16.170 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:32:16.170 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:32:21.438 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:32:21.438 15:57:42 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:32:21.438 15:57:42 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:32:21.438 15:57:42 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:32:21.438 15:57:42 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:32:21.438 15:57:42 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:32:21.439 15:57:42 nvme_fdp -- scripts/common.sh@18 -- # local i 00:32:21.439 15:57:42 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:32:21.439 15:57:42 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:32:21.439 15:57:42 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 
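The scan that starts at functions.sh@47 above walks /sys/class/nvme/nvme*, resolves each controller to its PCI address (nvme0 maps to 0000:00:11.0 at @49), asks pci_can_use whether the test may touch it, and only then runs nvme_get on the device. A minimal sketch of that enumeration, assuming the usual sysfs layout where each controller's device link points at its backing PCI function:

    # Enumerate NVMe controllers and the PCI functions behind them.
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue                       # no controllers present
        pci=$(basename "$(readlink -f "$ctrl/device")")  # e.g. 0000:00:11.0
        echo "found ${ctrl##*/} at $pci"
    done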
00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:32:21.439 15:57:42 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:32:21.439 15:57:42 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:32:21.439 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
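Of the nvme0 fields captured so far, mdts=7 is the one that shapes I/O sizing: per the NVMe spec, Maximum Data Transfer Size is reported as a power of two in units of the controller's minimum memory page size (CAP.MPSMIN). A quick arithmetic sketch, assuming the common 4 KiB minimum page (the page size is an assumption, not read from this trace):

    # MDTS is 2^mdts * MPSMIN bytes; mdts=7 with 4 KiB pages -> 512 KiB.
    mdts=7 mpsmin=4096
    echo $(( mpsmin * (1 << mdts) ))    # 524288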
00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:32:21.440 15:57:42 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.440 15:57:42 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:32:21.440 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:32:21.441 15:57:42 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x7 ]] 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:32:21.441 15:57:42 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=:
00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"'
00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0
00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]]
00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"'
00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341
00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"'
00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0
00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"'
00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0
00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"'
00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0
00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"'
00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0
00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"'
00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0
00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"'
00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0
00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]]
00:32:21.441 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"'
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]]
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"'
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-'
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]]
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"'
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=-
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()'
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]]
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"'
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]]
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"'
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]]
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"'
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]]
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"'
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]]
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"'
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]]
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"'
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]]
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"'
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]]
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"'
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"'
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"'
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"'
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"'
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]]
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"'
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"'
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"'
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"'
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"'
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"'
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"'
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"'
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"'
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"'
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"'
00:32:21.442 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"'
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"'
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"'
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]]
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"'
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]]
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"'
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]]
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"'
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"'
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"'
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"'
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"'
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"'
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]]
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"'
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]]
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"'
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]]
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "'
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]]
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "'
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]]
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "'
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]]
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "'
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]]
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"'
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]]
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "'
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]]
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "'
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]]
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "'
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]]
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0
00:32:21.443 15:57:42 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0
00:32:21.443 15:57:42 nvme_fdp -- scripts/common.sh@18 -- # local i
00:32:21.443 15:57:42 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]]
00:32:21.444 15:57:42 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:32:21.444 15:57:42 nvme_fdp -- scripts/common.sh@27 -- # return 0
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()'
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]]
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"'
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]]
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"'
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]]
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "'
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 '
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]]
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "'
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl '
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]]
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "'
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 '
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]]
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"'
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]]
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"'
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"'
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]]
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"'
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"'
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]]
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"'
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"'
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"'
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]]
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"'
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]]
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"'
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"'
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]]
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"'
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]]
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"'
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"'
00:32:21.444 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"'
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"'
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"'
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"'
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"'
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]]
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"'
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]]
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"'
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]]
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"'
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]]
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"'
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]]
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"'
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"'
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"'
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"'
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"'
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]]
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"'
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]]
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"'
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"'
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"'
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"'
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"'
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"'
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"'
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"'
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"'
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"'
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"'
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"'
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"'
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"'
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"'
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"'
00:32:21.445 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"'
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"'
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"'
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"'
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"'
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"'
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"'
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"'
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"'
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"'
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]]
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"'
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]]
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"'
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"'
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]]
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"'
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]]
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"'
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"'
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"'
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]]
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"'
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"'
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"'
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"'
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"'
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"'
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]]
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"'
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]]
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"'
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"'
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"'
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"'
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]]
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"'
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"'
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"'
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.446 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"'
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"'
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"'
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"'
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]]
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"'
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]]
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"'
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-'
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]]
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"'
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=-
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()'
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]]
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"'
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]]
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"'
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]]
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"'
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]]
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"'
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]]
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"'
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]]
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"'
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]]
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"'
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]]
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"'
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"'
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"'
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"'
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"'
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]]
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"'
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"'
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"'
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"'
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"'
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"'
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"'
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"'
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"'
00:32:21.447 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0
00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@21 --
# read -r reg val 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[anagrpid]="0"' 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:32:21.448 15:57:42 nvme_fdp -- scripts/common.sh@18 -- # local i 00:32:21.448 15:57:42 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:32:21.448 15:57:42 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:32:21.448 15:57:42 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:32:21.448 
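The traced statements above (nvme/functions.sh@16-23) show nvme_get caching every "reg : val" line of nvme-cli output into a bash associative array named after the device. A minimal sketch of that pattern, reconstructed from the trace; the whitespace trimming and the wrapper around the nvme binary are assumptions, not the verbatim SPDK source:

nvme_get() {
    local ref=$1 reg val
    shift
    local -gA "$ref=()"                  # global associative array, e.g. nvme2=()
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue        # keep only lines that carry a "reg : val" pair
        reg=${reg//[[:space:]]/}         # assumed: strip padding around the key
        val=${val# }                     # assumed: drop the single space after ':'
        eval "${ref}[${reg}]=\"${val}\"" # e.g. nvme2[vid]="0x1b36"
    done < <(/usr/local/src/nvme-cli/nvme "$@")   # id-ctrl /dev/nvme2, id-ns /dev/nvme2n1, ...
}

# usage, as in the trace: nvme_get nvme2 id-ctrl /dev/nvme2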
00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val
00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:32:21.448 15:57:42 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()'
00:32:21.449 15:57:42 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
00:32:21.449 nvme2 id-ctrl registers: vid=0x1b36 ssvid=0x1af4 sn='12342 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12342 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
00:32:21.452 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:32:21.452 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-'
00:32:21.452 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=-
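Zooming out, the @47-63 source references trace the discovery loop that builds this cache: enumerate /sys/class/nvme/nvme*, filter by PCI address, cache id-ctrl, then cache id-ns for each namespace and register the controller. A condensed reconstruction under stated assumptions (the function name and the pci derivation are hypothetical; the shape and variable names follow the traced statements, and ctrls/nvmes/bdfs/ordered_ctrls are assumed to be declared by the caller):

scan_ctrls() {   # hypothetical name for the caller visible at nvme/functions.sh@47-63
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(basename "$(readlink -f "$ctrl/device")")   # assumed; the trace only shows the result, e.g. pci=0000:00:12.0
        pci_can_use "$pci" || continue    # scripts/common.sh allow/block-list filter (returned 0 above)
        ctrl_dev=${ctrl##*/}              # nvme2
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
        local -n _ctrl_ns=${ctrl_dev}_ns  # nameref: nvme2_ns maps nsid -> namespace device
        for ns in "$ctrl/${ctrl##*/}n"*; do
            [[ -e $ns ]] || continue
            ns_dev=${ns##*/}              # nvme2n1
            nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
            _ctrl_ns[${ns##*n}]=$ns_dev   # nvme2_ns[1]=nvme2n1
        done
        ctrls["$ctrl_dev"]=$ctrl_dev
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns
        bdfs["$ctrl_dev"]=$pci            # bdfs[nvme2]=0000:00:12.0
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
    done
}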
00:32:21.452 15:57:42 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:32:21.452 15:57:42 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:32:21.452 15:57:42 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:32:21.452 15:57:42 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:32:21.452 15:57:42 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:32:21.452 15:57:42 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:32:21.452 nvme2n1 id-ns registers: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:32:21.453 nvme2n1 LBA formats: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0
' 00:32:21.453 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.453 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.453 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:32:21.453 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:32:21.453 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:32:21.453 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.453 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.453 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:32:21.453 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:32:21.453 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:32:21.453 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.453 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.453 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:32:21.453 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:32:21.453 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:32:21.453 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.453 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.453 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:32:21.453 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:32:21.453 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:32:21.453 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.453 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.453 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:32:21.453 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:32:21.453 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:32:21.453 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.453 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.453 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:32:21.453 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:32:21.453 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:32:21.453 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.453 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.453 15:57:42 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:32:21.453 15:57:42 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:32:21.453 15:57:42 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:32:21.453 15:57:42 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:32:21.453 15:57:42 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:32:21.453 15:57:42 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:32:21.453 15:57:42 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:32:21.453 15:57:42 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:32:21.453 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.453 15:57:42 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:32:21.453 15:57:42 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:32:21.453 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:21.453 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.453 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.453 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:21.453 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:32:21.453 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:32:21.453 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.453 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.453 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:21.453 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:32:21.453 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:32:21.453 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.453 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.453 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:21.453 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:32:21.453 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:32:21.453 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.454 15:57:42 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:32:21.454 15:57:42 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:32:21.454 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@54 -- # for 
ns in "$ctrl/${ctrl##*/}n"* 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.455 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
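Alongside the per-field parsing, the helper registers each parsed namespace and controller using bash parameter expansion: `${ns##*n}` strips everything through the last `n` to recover the namespace index (nvme2n3 -> 3), and `${ctrl_dev/nvme/}` drops the `nvme` prefix to get the controller index, as seen in the `_ctrl_ns[...]`, `ctrls[...]`, `bdfs[...]`, and `ordered_ctrls[...]` assignments in this trace. A small self-contained sketch of those expansions; the device names and the 0000:00:12.0 PCI address mirror this trace, while the bookkeeping arrays are simplified from functions.sh:

#!/usr/bin/env bash
# Sketch of the parameter expansions the trace uses to index devices.

ns=/sys/class/nvme/nvme2/nvme2n3
ctrl_dev=nvme2

declare -A _ctrl_ns ctrls bdfs

# ${ns##*n} removes the longest prefix ending in "n": .../nvme2n3 -> 3
_ctrl_ns[${ns##*n}]=${ns##*/}
# ${ctrl_dev/nvme/} drops the literal "nvme": nvme2 -> 2
ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev   # plain indexed array
ctrls[$ctrl_dev]=$ctrl_dev
bdfs[$ctrl_dev]=0000:00:12.0                 # PCI address from the trace

echo "namespace 3 -> ${_ctrl_ns[3]}"
echo "controller ${ordered_ctrls[2]} at ${bdfs[nvme2]}"
# Prints: namespace 3 -> nvme2n3 / controller nvme2 at 0000:00:12.0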
00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:32:21.456 
15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:32:21.456 15:57:42 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:32:21.456 15:57:42 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:32:21.457 15:57:42 nvme_fdp -- scripts/common.sh@18 -- # local i 00:32:21.457 15:57:42 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:32:21.457 15:57:42 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:32:21.457 15:57:42 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:32:21.457 15:57:42 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.457 15:57:42 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.457 
15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.457 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.458 15:57:42 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.458 
15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:32:21.458 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.459 15:57:42 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
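The trace in this stretch is nvme/functions.sh folding identify-controller output into a per-controller associative array: each "register: value" line is split with IFS=: and read -r, then assigned via eval 'nvme3[reg]="val"'. A minimal sketch of the same parsing pattern, assuming a hypothetical id_ctrl.txt holding "register : value" lines (the real script reads the live controller instead):

    declare -A nvme3
    while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}        # register names carry padding in the raw output
      val=${val# }                    # drop the single space after the colon
      [[ -n $reg ]] || continue
      nvme3[$reg]=$val                # mirrors: eval 'nvme3[reg]="val"'
    done < id_ctrl.txt
    echo "wctemp=${nvme3[wctemp]} subnqn=${nvme3[subnqn]}"
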
00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:21.459 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.460 15:57:42 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:32:21.460 15:57:42 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
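The ctrl_has_fdp calls in this stretch reduce to one bit test: CTRATT bit 19 (0x80000) advertises Flexible Data Placement, so controllers reporting ctratt=0x8000 fail the gate while nvme3's 0x88010 passes. A standalone sketch of that check:

    ctrl_has_fdp() {
      local ctratt=$1
      (( ctratt & 1 << 19 ))                       # CTRATT bit 19 = FDP supported
    }
    ctrl_has_fdp 0x8000  && echo "has FDP"         # no output: bit 19 clear
    ctrl_has_fdp 0x88010 && echo "has FDP"         # prints: 0x88010 includes 0x80000
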
00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:32:21.460 15:57:42 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:32:21.460 15:57:42 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:32:21.460 15:57:42 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:32:21.460 15:57:42 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:32:21.719 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:22.285 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:32:22.285 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:32:22.285 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:32:22.285 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:32:22.285 15:57:43 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:32:22.285 15:57:43 nvme_fdp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:32:22.285 15:57:43 
nvme_fdp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:22.285 15:57:43 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:32:22.285 ************************************ 00:32:22.285 START TEST nvme_flexible_data_placement 00:32:22.285 ************************************ 00:32:22.286 15:57:43 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:32:22.543 Initializing NVMe Controllers 00:32:22.543 Attaching to 0000:00:13.0 00:32:22.543 Controller supports FDP Attached to 0000:00:13.0 00:32:22.543 Namespace ID: 1 Endurance Group ID: 1 00:32:22.543 Initialization complete. 00:32:22.543 00:32:22.543 ================================== 00:32:22.543 == FDP tests for Namespace: #01 == 00:32:22.543 ================================== 00:32:22.543 00:32:22.543 Get Feature: FDP: 00:32:22.543 ================= 00:32:22.543 Enabled: Yes 00:32:22.543 FDP configuration Index: 0 00:32:22.543 00:32:22.543 FDP configurations log page 00:32:22.543 =========================== 00:32:22.543 Number of FDP configurations: 1 00:32:22.543 Version: 0 00:32:22.543 Size: 112 00:32:22.543 FDP Configuration Descriptor: 0 00:32:22.543 Descriptor Size: 96 00:32:22.543 Reclaim Group Identifier format: 2 00:32:22.543 FDP Volatile Write Cache: Not Present 00:32:22.543 FDP Configuration: Valid 00:32:22.543 Vendor Specific Size: 0 00:32:22.543 Number of Reclaim Groups: 2 00:32:22.543 Number of Reclaim Unit Handles: 8 00:32:22.543 Max Placement Identifiers: 128 00:32:22.543 Number of Namespaces Supported: 256 00:32:22.543 Reclaim Unit Nominal Size: 6000000 bytes 00:32:22.543 Estimated Reclaim Unit Time Limit: Not Reported 00:32:22.543 RUH Desc #000: RUH Type: Initially Isolated 00:32:22.543 RUH Desc #001: RUH Type: Initially Isolated 00:32:22.543 RUH Desc #002: RUH Type: Initially Isolated 00:32:22.543 RUH Desc #003: RUH Type: Initially Isolated 00:32:22.543 RUH Desc #004: RUH Type: Initially Isolated 00:32:22.543 RUH Desc #005: RUH Type: Initially Isolated 00:32:22.543 RUH Desc #006: RUH Type: Initially Isolated 00:32:22.543 RUH Desc #007: RUH Type: Initially Isolated 00:32:22.543 00:32:22.543 FDP reclaim unit handle usage log page 00:32:22.543 ====================================== 00:32:22.543 Number of Reclaim Unit Handles: 8 00:32:22.543 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:32:22.543 RUH Usage Desc #001: RUH Attributes: Unused 00:32:22.543 RUH Usage Desc #002: RUH Attributes: Unused 00:32:22.543 RUH Usage Desc #003: RUH Attributes: Unused 00:32:22.543 RUH Usage Desc #004: RUH Attributes: Unused 00:32:22.543 RUH Usage Desc #005: RUH Attributes: Unused 00:32:22.543 RUH Usage Desc #006: RUH Attributes: Unused 00:32:22.543 RUH Usage Desc #007: RUH Attributes: Unused 00:32:22.543 00:32:22.543 FDP statistics log page 00:32:22.543 ======================= 00:32:22.543 Host bytes with metadata written: 1091264512 00:32:22.543 Media bytes with metadata written: 1091432448 00:32:22.543 Media bytes erased: 0 00:32:22.543 00:32:22.543 FDP Reclaim unit handle status 00:32:22.543 ============================== 00:32:22.543 Number of RUHS descriptors: 2 00:32:22.543 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000000f4a 00:32:22.543 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:32:22.543 00:32:22.543 FDP write on placement id: 0 success 00:32:22.543 00:32:22.543 Set Feature: Enabling FDP events on Placement handle:

#0 Success 00:32:22.543 00:32:22.543 IO mgmt send: RUH update for Placement ID: #0 Success 00:32:22.543 00:32:22.543 Get Feature: FDP Events for Placement handle: #0 00:32:22.543 ======================== 00:32:22.543 Number of FDP Events: 6 00:32:22.543 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:32:22.543 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:32:22.543 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:32:22.543 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:32:22.543 FDP Event: #4 Type: Media Reallocated Enabled: No 00:32:22.543 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:32:22.543 00:32:22.543 FDP events log page 00:32:22.543 =================== 00:32:22.543 Number of FDP events: 1 00:32:22.543 FDP Event #0: 00:32:22.543 Event Type: RU Not Written to Capacity 00:32:22.543 Placement Identifier: Valid 00:32:22.543 NSID: Valid 00:32:22.543 Location: Valid 00:32:22.543 Placement Identifier: 0 00:32:22.543 Event Timestamp: 5 00:32:22.543 Namespace Identifier: 1 00:32:22.543 Reclaim Group Identifier: 0 00:32:22.543 Reclaim Unit Handle Identifier: 0 00:32:22.543 00:32:22.543 FDP test passed 00:32:22.543 00:32:22.543 real 0m0.226s 00:32:22.543 user 0m0.072s 00:32:22.543 sys 0m0.053s 00:32:22.543 15:57:43 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:22.543 ************************************ 00:32:22.543 END TEST nvme_flexible_data_placement 00:32:22.543 15:57:43 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:32:22.543 ************************************ 00:32:22.543 00:32:22.543 real 0m7.324s 00:32:22.543 user 0m0.947s 00:32:22.543 sys 0m1.336s 00:32:22.543 15:57:43 nvme_fdp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:22.543 15:57:43 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:32:22.543 ************************************ 00:32:22.543 END TEST nvme_fdp 00:32:22.543 ************************************ 00:32:22.802 15:57:43 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:32:22.802 15:57:43 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:32:22.802 15:57:43 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:32:22.802 15:57:43 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:22.802 15:57:43 -- common/autotest_common.sh@10 -- # set +x 00:32:22.802 ************************************ 00:32:22.802 START TEST nvme_rpc 00:32:22.802 ************************************ 00:32:22.802 15:57:43 nvme_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:32:22.802 * Looking for test storage... 
00:32:22.802 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:32:22.802 15:57:43 nvme_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:22.802 15:57:44 nvme_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:22.802 15:57:44 nvme_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:32:22.802 15:57:44 nvme_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:22.802 15:57:44 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:22.802 15:57:44 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:22.802 15:57:44 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:22.802 15:57:44 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:32:22.802 15:57:44 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:32:22.802 15:57:44 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:32:22.802 15:57:44 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:32:22.802 15:57:44 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:32:22.802 15:57:44 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:32:22.802 15:57:44 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:32:22.802 15:57:44 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:22.802 15:57:44 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:32:22.802 15:57:44 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:32:22.802 15:57:44 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:22.802 15:57:44 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:22.802 15:57:44 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:32:22.802 15:57:44 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:32:22.802 15:57:44 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:22.802 15:57:44 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:32:22.802 15:57:44 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:32:22.802 15:57:44 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:32:22.802 15:57:44 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:32:22.802 15:57:44 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:22.802 15:57:44 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:32:22.802 15:57:44 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:32:22.802 15:57:44 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:22.802 15:57:44 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:22.802 15:57:44 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:32:22.802 15:57:44 nvme_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:22.802 15:57:44 nvme_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:22.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:22.802 --rc genhtml_branch_coverage=1 00:32:22.802 --rc genhtml_function_coverage=1 00:32:22.802 --rc genhtml_legend=1 00:32:22.802 --rc geninfo_all_blocks=1 00:32:22.802 --rc geninfo_unexecuted_blocks=1 00:32:22.802 00:32:22.802 ' 00:32:22.802 15:57:44 nvme_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:22.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:22.802 --rc genhtml_branch_coverage=1 00:32:22.802 --rc genhtml_function_coverage=1 00:32:22.802 --rc genhtml_legend=1 00:32:22.802 --rc geninfo_all_blocks=1 00:32:22.802 --rc geninfo_unexecuted_blocks=1 00:32:22.802 00:32:22.802 ' 00:32:22.802 15:57:44 nvme_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:32:22.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:22.802 --rc genhtml_branch_coverage=1 00:32:22.802 --rc genhtml_function_coverage=1 00:32:22.802 --rc genhtml_legend=1 00:32:22.802 --rc geninfo_all_blocks=1 00:32:22.802 --rc geninfo_unexecuted_blocks=1 00:32:22.802 00:32:22.802 ' 00:32:22.802 15:57:44 nvme_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:22.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:22.802 --rc genhtml_branch_coverage=1 00:32:22.802 --rc genhtml_function_coverage=1 00:32:22.802 --rc genhtml_legend=1 00:32:22.802 --rc geninfo_all_blocks=1 00:32:22.802 --rc geninfo_unexecuted_blocks=1 00:32:22.802 00:32:22.802 ' 00:32:22.802 15:57:44 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:22.802 15:57:44 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:32:22.802 15:57:44 nvme_rpc -- common/autotest_common.sh@1507 -- # bdfs=() 00:32:22.802 15:57:44 nvme_rpc -- common/autotest_common.sh@1507 -- # local bdfs 00:32:22.802 15:57:44 nvme_rpc -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:32:22.802 15:57:44 nvme_rpc -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:32:22.802 15:57:44 nvme_rpc -- common/autotest_common.sh@1496 -- # bdfs=() 00:32:22.802 15:57:44 nvme_rpc -- common/autotest_common.sh@1496 -- # local bdfs 00:32:22.802 15:57:44 nvme_rpc -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:22.802 15:57:44 nvme_rpc -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:32:22.802 15:57:44 nvme_rpc -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:32:22.802 15:57:44 nvme_rpc -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:32:22.802 15:57:44 nvme_rpc -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:32:22.802 15:57:44 nvme_rpc -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:32:22.802 15:57:44 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:32:22.802 15:57:44 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:32:22.802 15:57:44 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=65680 00:32:22.802 15:57:44 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:32:22.802 15:57:44 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 65680 00:32:22.802 15:57:44 nvme_rpc -- common/autotest_common.sh@833 -- # '[' -z 65680 ']' 00:32:22.802 15:57:44 nvme_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:22.802 15:57:44 nvme_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:22.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:22.802 15:57:44 nvme_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:22.802 15:57:44 nvme_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:22.802 15:57:44 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:23.060 [2024-11-05 15:57:44.201367] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 
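Earlier in this block, get_first_nvme_bdf pulls every controller address out of gen_nvme.sh's JSON output and keeps the first one. A sketch of that selection, assuming gen_nvme.sh emits the bdev-config JSON shape seen in the trace:

    # $rootdir points at the SPDK checkout, as in the trace
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo 'no NVMe controllers found' >&2; exit 1; }
    bdf=${bdfs[0]}                     # here: 0000:00:10.0, first of four controllers
    echo "$bdf"
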
00:32:23.060 [2024-11-05 15:57:44.201505] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65680 ] 00:32:23.060 [2024-11-05 15:57:44.359880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:23.317 [2024-11-05 15:57:44.462645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:23.317 [2024-11-05 15:57:44.462659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:23.881 15:57:45 nvme_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:23.881 15:57:45 nvme_rpc -- common/autotest_common.sh@866 -- # return 0 00:32:23.881 15:57:45 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:32:24.139 Nvme0n1 00:32:24.139 15:57:45 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:32:24.139 15:57:45 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:32:24.397 request: 00:32:24.397 { 00:32:24.397 "bdev_name": "Nvme0n1", 00:32:24.397 "filename": "non_existing_file", 00:32:24.397 "method": "bdev_nvme_apply_firmware", 00:32:24.397 "req_id": 1 00:32:24.397 } 00:32:24.397 Got JSON-RPC error response 00:32:24.397 response: 00:32:24.397 { 00:32:24.397 "code": -32603, 00:32:24.397 "message": "open file failed." 00:32:24.397 } 00:32:24.397 15:57:45 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:32:24.397 15:57:45 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:32:24.397 15:57:45 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:32:24.657 15:57:45 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:32:24.657 15:57:45 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 65680 00:32:24.657 15:57:45 nvme_rpc -- common/autotest_common.sh@952 -- # '[' -z 65680 ']' 00:32:24.657 15:57:45 nvme_rpc -- common/autotest_common.sh@956 -- # kill -0 65680 00:32:24.657 15:57:45 nvme_rpc -- common/autotest_common.sh@957 -- # uname 00:32:24.657 15:57:45 nvme_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:24.657 15:57:45 nvme_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65680 00:32:24.657 15:57:45 nvme_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:24.657 killing process with pid 65680 00:32:24.657 15:57:45 nvme_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:24.657 15:57:45 nvme_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65680' 00:32:24.657 15:57:45 nvme_rpc -- common/autotest_common.sh@971 -- # kill 65680 00:32:24.657 15:57:45 nvme_rpc -- common/autotest_common.sh@976 -- # wait 65680 00:32:26.027 00:32:26.027 real 0m3.346s 00:32:26.027 user 0m6.497s 00:32:26.027 sys 0m0.496s 00:32:26.027 15:57:47 nvme_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:26.027 ************************************ 00:32:26.027 END TEST nvme_rpc 00:32:26.027 ************************************ 00:32:26.027 15:57:47 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:26.027 15:57:47 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:32:26.027 15:57:47 -- common/autotest_common.sh@1103 -- # '[' 2 -le 
1 ']' 00:32:26.027 15:57:47 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:26.027 15:57:47 -- common/autotest_common.sh@10 -- # set +x 00:32:26.027 ************************************ 00:32:26.027 START TEST nvme_rpc_timeouts 00:32:26.027 ************************************ 00:32:26.027 15:57:47 nvme_rpc_timeouts -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:32:26.027 * Looking for test storage... 00:32:26.027 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:32:26.027 15:57:47 nvme_rpc_timeouts -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:26.027 15:57:47 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # lcov --version 00:32:26.027 15:57:47 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:26.284 15:57:47 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:26.284 15:57:47 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:26.284 15:57:47 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:26.284 15:57:47 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:26.284 15:57:47 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:32:26.285 15:57:47 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:32:26.285 15:57:47 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:32:26.285 15:57:47 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:32:26.285 15:57:47 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:32:26.285 15:57:47 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:32:26.285 15:57:47 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:32:26.285 15:57:47 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:26.285 15:57:47 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:32:26.285 15:57:47 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:32:26.285 15:57:47 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:26.285 15:57:47 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:26.285 15:57:47 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:32:26.285 15:57:47 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:32:26.285 15:57:47 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:26.285 15:57:47 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:32:26.285 15:57:47 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:32:26.285 15:57:47 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:32:26.285 15:57:47 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:32:26.285 15:57:47 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:26.285 15:57:47 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:32:26.285 15:57:47 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:32:26.285 15:57:47 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:26.285 15:57:47 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:26.285 15:57:47 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:32:26.285 15:57:47 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:26.285 15:57:47 nvme_rpc_timeouts -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:26.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:26.285 --rc genhtml_branch_coverage=1 00:32:26.285 --rc genhtml_function_coverage=1 00:32:26.285 --rc genhtml_legend=1 00:32:26.285 --rc geninfo_all_blocks=1 00:32:26.285 --rc geninfo_unexecuted_blocks=1 00:32:26.285 00:32:26.285 ' 00:32:26.285 15:57:47 nvme_rpc_timeouts -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:26.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:26.285 --rc genhtml_branch_coverage=1 00:32:26.285 --rc genhtml_function_coverage=1 00:32:26.285 --rc genhtml_legend=1 00:32:26.285 --rc geninfo_all_blocks=1 00:32:26.285 --rc geninfo_unexecuted_blocks=1 00:32:26.285 00:32:26.285 ' 00:32:26.285 15:57:47 nvme_rpc_timeouts -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:26.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:26.285 --rc genhtml_branch_coverage=1 00:32:26.285 --rc genhtml_function_coverage=1 00:32:26.285 --rc genhtml_legend=1 00:32:26.285 --rc geninfo_all_blocks=1 00:32:26.285 --rc geninfo_unexecuted_blocks=1 00:32:26.285 00:32:26.285 ' 00:32:26.285 15:57:47 nvme_rpc_timeouts -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:26.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:26.285 --rc genhtml_branch_coverage=1 00:32:26.285 --rc genhtml_function_coverage=1 00:32:26.285 --rc genhtml_legend=1 00:32:26.285 --rc geninfo_all_blocks=1 00:32:26.285 --rc geninfo_unexecuted_blocks=1 00:32:26.285 00:32:26.285 ' 00:32:26.285 15:57:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:26.285 15:57:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_65751 00:32:26.285 15:57:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_65751 00:32:26.285 15:57:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=65783 00:32:26.285 15:57:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
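The setup sequence repeats the nvme_rpc pattern: launch spdk_tgt in the background, register a trap so the target is killed on any exit path, then block in waitforlisten until the RPC socket answers. A simplified sketch, where polling rpc_get_methods is my stand-in for the real waitforlisten helper:

    "$rootdir/build/bin/spdk_tgt" -m 0x3 &
    spdk_tgt_pid=$!
    trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT
    # poll the default socket /var/tmp/spdk.sock until the target responds
    until "$rootdir/scripts/rpc.py" -t 1 rpc_get_methods &>/dev/null; do
      sleep 0.5
    done
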
00:32:26.285 15:57:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 65783 00:32:26.285 15:57:47 nvme_rpc_timeouts -- common/autotest_common.sh@833 -- # '[' -z 65783 ']' 00:32:26.285 15:57:47 nvme_rpc_timeouts -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:26.285 15:57:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:32:26.285 15:57:47 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:26.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:26.285 15:57:47 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:26.285 15:57:47 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:26.285 15:57:47 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:32:26.285 [2024-11-05 15:57:47.520479] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 00:32:26.285 [2024-11-05 15:57:47.520572] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65783 ] 00:32:26.542 [2024-11-05 15:57:47.671305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:26.542 [2024-11-05 15:57:47.767958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:26.542 [2024-11-05 15:57:47.767976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:27.106 Checking default timeout settings: 00:32:27.106 15:57:48 nvme_rpc_timeouts -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:27.106 15:57:48 nvme_rpc_timeouts -- common/autotest_common.sh@866 -- # return 0 00:32:27.106 15:57:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:32:27.106 15:57:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:32:27.364 Making settings changes with rpc: 00:32:27.364 15:57:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:32:27.364 15:57:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:32:27.621 Check default vs. modified settings: 00:32:27.621 15:57:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:32:27.621 15:57:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:32:27.878 15:57:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:32:27.878 15:57:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:32:27.878 15:57:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:32:27.878 15:57:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_65751 00:32:27.878 15:57:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:32:27.878 15:57:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:32:27.878 15:57:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:32:27.878 15:57:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_65751 00:32:27.879 15:57:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:32:27.879 Setting action_on_timeout is changed as expected. 00:32:27.879 15:57:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:32:27.879 15:57:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:32:27.879 15:57:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:32:27.879 15:57:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:32:27.879 15:57:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_65751 00:32:27.879 15:57:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:32:27.879 15:57:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:32:27.879 15:57:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:32:27.879 15:57:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_65751 00:32:27.879 15:57:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:32:28.137 15:57:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:32:28.137 Setting timeout_us is changed as expected. 00:32:28.137 15:57:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:32:28.137 15:57:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:32:28.137 15:57:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
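Each comparison above is the same three-stage pipeline run against the two save_config dumps: grep pulls the option's line, awk takes the value column, sed strips the JSON punctuation. Condensed into a helper, using the temp-file names from this run:

    get_setting() {  # extract one bdev_nvme option value from a saved config dump
      grep "$2" "$1" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g'
    }
    before=$(get_setting /tmp/settings_default_65751 timeout_us)    # 0
    after=$(get_setting /tmp/settings_modified_65751 timeout_us)    # 12000000
    [[ $before != "$after" ]] && echo 'Setting timeout_us is changed as expected.'
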
00:32:28.137 15:57:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:32:28.137 15:57:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_65751 00:32:28.137 15:57:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:32:28.137 15:57:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:32:28.137 15:57:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:32:28.137 15:57:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_65751 00:32:28.137 15:57:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:32:28.137 15:57:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:32:28.137 Setting timeout_admin_us is changed as expected. 00:32:28.137 15:57:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:32:28.137 15:57:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:32:28.137 15:57:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:32:28.137 15:57:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:32:28.137 15:57:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_65751 /tmp/settings_modified_65751 00:32:28.137 15:57:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 65783 00:32:28.137 15:57:49 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # '[' -z 65783 ']' 00:32:28.137 15:57:49 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # kill -0 65783 00:32:28.137 15:57:49 nvme_rpc_timeouts -- common/autotest_common.sh@957 -- # uname 00:32:28.137 15:57:49 nvme_rpc_timeouts -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:28.137 15:57:49 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65783 00:32:28.137 killing process with pid 65783 00:32:28.137 15:57:49 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:28.137 15:57:49 nvme_rpc_timeouts -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:28.137 15:57:49 nvme_rpc_timeouts -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65783' 00:32:28.137 15:57:49 nvme_rpc_timeouts -- common/autotest_common.sh@971 -- # kill 65783 00:32:28.137 15:57:49 nvme_rpc_timeouts -- common/autotest_common.sh@976 -- # wait 65783 00:32:29.510 RPC TIMEOUT SETTING TEST PASSED. 00:32:29.510 15:57:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
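killprocess, used to reap both spdk_tgt instances in this section, probes the pid with kill -0 before signalling and looks up the process name to refuse killing sudo. A trimmed sketch of its core (the real helper's sudo guard and retry loop are omitted):

    killprocess() {
      local pid=$1
      kill -0 "$pid" 2>/dev/null || return 0      # nothing to do if already gone
      local name
      name=$(ps --no-headers -o comm= "$pid")
      echo "killing process with pid $pid ($name)"
      kill "$pid" && wait "$pid"                  # wait only applies to our own children
    }
    killprocess 65783
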
00:32:29.510 00:32:29.510 real 0m3.164s 00:32:29.510 user 0m6.209s 00:32:29.510 sys 0m0.457s 00:32:29.510 15:57:50 nvme_rpc_timeouts -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:29.510 15:57:50 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:32:29.510 ************************************ 00:32:29.510 END TEST nvme_rpc_timeouts 00:32:29.510 ************************************ 00:32:29.510 15:57:50 -- spdk/autotest.sh@239 -- # uname -s 00:32:29.510 15:57:50 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:32:29.510 15:57:50 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:32:29.510 15:57:50 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:32:29.510 15:57:50 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:29.510 15:57:50 -- common/autotest_common.sh@10 -- # set +x 00:32:29.510 ************************************ 00:32:29.510 START TEST sw_hotplug 00:32:29.510 ************************************ 00:32:29.510 15:57:50 sw_hotplug -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:32:29.510 * Looking for test storage... 00:32:29.510 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:32:29.510 15:57:50 sw_hotplug -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:29.510 15:57:50 sw_hotplug -- common/autotest_common.sh@1691 -- # lcov --version 00:32:29.510 15:57:50 sw_hotplug -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:29.510 15:57:50 sw_hotplug -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:29.510 15:57:50 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:29.510 15:57:50 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:29.510 15:57:50 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:29.510 15:57:50 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:32:29.510 15:57:50 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:32:29.510 15:57:50 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:32:29.510 15:57:50 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:32:29.510 15:57:50 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:32:29.510 15:57:50 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:32:29.510 15:57:50 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:32:29.510 15:57:50 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:29.510 15:57:50 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:32:29.510 15:57:50 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:32:29.510 15:57:50 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:29.510 15:57:50 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:29.510 15:57:50 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:32:29.510 15:57:50 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:32:29.510 15:57:50 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:29.510 15:57:50 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:32:29.510 15:57:50 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:32:29.510 15:57:50 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:32:29.510 15:57:50 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:32:29.510 15:57:50 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:29.510 15:57:50 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:32:29.510 15:57:50 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:32:29.510 15:57:50 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:29.510 15:57:50 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:29.510 15:57:50 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:32:29.510 15:57:50 sw_hotplug -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:29.510 15:57:50 sw_hotplug -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:29.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:29.510 --rc genhtml_branch_coverage=1 00:32:29.510 --rc genhtml_function_coverage=1 00:32:29.510 --rc genhtml_legend=1 00:32:29.510 --rc geninfo_all_blocks=1 00:32:29.510 --rc geninfo_unexecuted_blocks=1 00:32:29.510 00:32:29.510 ' 00:32:29.510 15:57:50 sw_hotplug -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:29.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:29.510 --rc genhtml_branch_coverage=1 00:32:29.510 --rc genhtml_function_coverage=1 00:32:29.510 --rc genhtml_legend=1 00:32:29.510 --rc geninfo_all_blocks=1 00:32:29.510 --rc geninfo_unexecuted_blocks=1 00:32:29.510 00:32:29.510 ' 00:32:29.510 15:57:50 sw_hotplug -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:29.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:29.510 --rc genhtml_branch_coverage=1 00:32:29.510 --rc genhtml_function_coverage=1 00:32:29.510 --rc genhtml_legend=1 00:32:29.510 --rc geninfo_all_blocks=1 00:32:29.510 --rc geninfo_unexecuted_blocks=1 00:32:29.510 00:32:29.510 ' 00:32:29.510 15:57:50 sw_hotplug -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:29.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:29.510 --rc genhtml_branch_coverage=1 00:32:29.510 --rc genhtml_function_coverage=1 00:32:29.510 --rc genhtml_legend=1 00:32:29.510 --rc geninfo_all_blocks=1 00:32:29.510 --rc geninfo_unexecuted_blocks=1 00:32:29.510 00:32:29.510 ' 00:32:29.510 15:57:50 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:32:29.768 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:29.768 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:32:29.768 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:32:29.768 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:32:29.768 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:32:29.768 15:57:51 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:32:29.768 15:57:51 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:32:29.768 15:57:51 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
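The cmp_versions trace just above is a field-wise dotted-version compare: both strings are split on ".", "-" and ":", then walked field by field until one side wins ("lt 1.15 2" asks whether the installed lcov predates 2.x). A condensed sketch of the same idea, not the script verbatim:

    # return 0 if $1 sorts strictly before $2, comparing numeric fields
    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    lt 1.15 2 && echo "lcov is older than 2.x"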
00:32:29.768 15:57:51 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@233 -- # local class 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@18 -- # local i 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@18 -- # local i 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@18 -- # local i 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:32:29.768 15:57:51 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@18 -- # local i 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:32:29.768 15:57:51 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:32:29.769 15:57:51 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:32:29.769 15:57:51 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:32:29.769 15:57:51 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:32:30.025 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:30.283 Waiting for block devices as requested 00:32:30.283 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:32:30.283 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:32:30.539 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:32:30.539 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:32:35.851 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:32:35.851 15:57:56 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:32:35.851 15:57:56 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:32:35.851 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:32:35.851 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:35.851 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:32:36.108 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:32:36.365 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:32:36.365 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:32:36.365 15:57:57 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:32:36.365 15:57:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:32:36.365 15:57:57 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:32:36.365 15:57:57 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:32:36.365 15:57:57 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=66635 00:32:36.365 15:57:57 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:32:36.365 15:57:57 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:32:36.365 15:57:57 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:32:36.365 15:57:57 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:32:36.365 15:57:57 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:32:36.365 15:57:57 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:32:36.365 15:57:57 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:32:36.365 15:57:57 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:32:36.365 15:57:57 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:32:36.365 15:57:57 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:32:36.365 15:57:57 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:32:36.365 15:57:57 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:32:36.365 15:57:57 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:32:36.365 15:57:57 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:32:36.621 Initializing NVMe Controllers 00:32:36.621 Attaching to 0000:00:10.0 00:32:36.621 Attaching to 0000:00:11.0 00:32:36.621 Attached to 0000:00:10.0 00:32:36.621 Attached to 0000:00:11.0 00:32:36.621 Initialization complete. Starting I/O... 
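What follows is the userspace half of the test: three surprise-removal events (hotplug_events=3) against the hotplug example app. The device list itself came from nvme_in_userspace above, which filters "lspci -mm -n -D" for class code 0108 (NVMe). The "echo 1" writes traced at sw_hotplug.sh lines 40 and 56 below are the standard sysfs remove/rescan mechanism; a hedged sketch of one event (paths are the usual sysfs ones, the pause mirrors hotplug_wait=6):

    bdf=0000:00:10.0
    echo 1 > "/sys/bus/pci/devices/$bdf/remove"   # surprise-remove the device from the PCI bus
    sleep 6                                       # give the driver time to fail the controller
    echo 1 > /sys/bus/pci/rescan                  # re-enumerate; the device re-attaches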
00:32:36.621 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:32:36.621 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:32:36.621 00:32:37.550 QEMU NVMe Ctrl (12340 ): 2841 I/Os completed (+2841) 00:32:37.550 QEMU NVMe Ctrl (12341 ): 2943 I/Os completed (+2943) 00:32:37.550 00:32:38.918 QEMU NVMe Ctrl (12340 ): 6410 I/Os completed (+3569) 00:32:38.918 QEMU NVMe Ctrl (12341 ): 6483 I/Os completed (+3540) 00:32:38.918 00:32:39.848 QEMU NVMe Ctrl (12340 ): 10015 I/Os completed (+3605) 00:32:39.848 QEMU NVMe Ctrl (12341 ): 10024 I/Os completed (+3541) 00:32:39.848 00:32:40.778 QEMU NVMe Ctrl (12340 ): 13543 I/Os completed (+3528) 00:32:40.778 QEMU NVMe Ctrl (12341 ): 13529 I/Os completed (+3505) 00:32:40.778 00:32:41.711 QEMU NVMe Ctrl (12340 ): 16720 I/Os completed (+3177) 00:32:41.711 QEMU NVMe Ctrl (12341 ): 16583 I/Os completed (+3054) 00:32:41.711 00:32:42.642 15:58:03 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:32:42.642 15:58:03 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:32:42.642 15:58:03 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:32:42.642 [2024-11-05 15:58:03.708305] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:32:42.642 Controller removed: QEMU NVMe Ctrl (12340 ) 00:32:42.642 [2024-11-05 15:58:03.709517] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:42.642 [2024-11-05 15:58:03.709568] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:42.642 [2024-11-05 15:58:03.709585] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:42.642 [2024-11-05 15:58:03.709601] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:42.642 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:32:42.642 [2024-11-05 15:58:03.711518] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:42.642 [2024-11-05 15:58:03.711568] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:42.642 [2024-11-05 15:58:03.711585] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:42.642 [2024-11-05 15:58:03.711599] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:42.642 15:58:03 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:32:42.642 15:58:03 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:32:42.642 [2024-11-05 15:58:03.735721] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:32:42.642 Controller removed: QEMU NVMe Ctrl (12341 ) 00:32:42.642 [2024-11-05 15:58:03.736810] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:42.642 [2024-11-05 15:58:03.736853] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:42.642 [2024-11-05 15:58:03.736873] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:42.642 [2024-11-05 15:58:03.736888] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:42.642 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:32:42.642 [2024-11-05 15:58:03.738564] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:42.642 [2024-11-05 15:58:03.738600] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:42.642 [2024-11-05 15:58:03.738616] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:42.642 [2024-11-05 15:58:03.738628] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:42.642 15:58:03 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:32:42.642 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:32:42.642 EAL: Scan for (pci) bus failed. 00:32:42.642 15:58:03 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:32:42.642 15:58:03 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:32:42.642 15:58:03 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:32:42.642 15:58:03 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:32:42.642 15:58:03 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:32:42.642 15:58:03 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:32:42.642 15:58:03 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:32:42.642 15:58:03 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:32:42.642 15:58:03 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:32:42.642 Attaching to 0000:00:10.0 00:32:42.642 Attached to 0000:00:10.0 00:32:42.642 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:32:42.642 00:32:42.642 15:58:03 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:32:42.642 15:58:03 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:32:42.642 15:58:03 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:32:42.642 Attaching to 0000:00:11.0 00:32:42.642 Attached to 0000:00:11.0 00:32:43.573 QEMU NVMe Ctrl (12340 ): 3621 I/Os completed (+3621) 00:32:43.573 QEMU NVMe Ctrl (12341 ): 3441 I/Os completed (+3441) 00:32:43.573 00:32:44.944 QEMU NVMe Ctrl (12340 ): 7645 I/Os completed (+4024) 00:32:44.944 QEMU NVMe Ctrl (12341 ): 7714 I/Os completed (+4273) 00:32:44.944 00:32:45.876 QEMU NVMe Ctrl (12340 ): 11271 I/Os completed (+3626) 00:32:45.876 QEMU NVMe Ctrl (12341 ): 11253 I/Os completed (+3539) 00:32:45.876 00:32:46.807 QEMU NVMe Ctrl (12340 ): 15026 I/Os completed (+3755) 00:32:46.807 QEMU NVMe Ctrl (12341 ): 14789 I/Os completed (+3536) 00:32:46.807 00:32:47.760 QEMU NVMe Ctrl (12340 ): 18617 I/Os completed (+3591) 00:32:47.761 QEMU NVMe Ctrl (12341 ): 18209 I/Os completed (+3420) 00:32:47.761 00:32:48.693 QEMU NVMe Ctrl (12340 ): 22307 I/Os completed (+3690) 00:32:48.693 QEMU NVMe Ctrl (12341 ): 22138 I/Os completed (+3929) 00:32:48.693 00:32:49.624 QEMU NVMe Ctrl (12340 ): 25998 I/Os completed (+3691) 
00:32:49.624 QEMU NVMe Ctrl (12341 ): 25752 I/Os completed (+3614) 00:32:49.624 00:32:50.556 QEMU NVMe Ctrl (12340 ): 29699 I/Os completed (+3701) 00:32:50.556 QEMU NVMe Ctrl (12341 ): 29455 I/Os completed (+3703) 00:32:50.556 00:32:51.927 QEMU NVMe Ctrl (12340 ): 33358 I/Os completed (+3659) 00:32:51.927 QEMU NVMe Ctrl (12341 ): 33201 I/Os completed (+3746) 00:32:51.927 00:32:52.860 QEMU NVMe Ctrl (12340 ): 37096 I/Os completed (+3738) 00:32:52.860 QEMU NVMe Ctrl (12341 ): 36885 I/Os completed (+3684) 00:32:52.860 00:32:53.792 QEMU NVMe Ctrl (12340 ): 40464 I/Os completed (+3368) 00:32:53.792 QEMU NVMe Ctrl (12341 ): 40165 I/Os completed (+3280) 00:32:53.792 00:32:54.751 QEMU NVMe Ctrl (12340 ): 44119 I/Os completed (+3655) 00:32:54.751 QEMU NVMe Ctrl (12341 ): 43662 I/Os completed (+3497) 00:32:54.751 00:32:54.751 15:58:15 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:32:54.751 15:58:15 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:32:54.751 15:58:15 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:32:54.751 15:58:15 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:32:54.751 [2024-11-05 15:58:15.988656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:32:54.751 Controller removed: QEMU NVMe Ctrl (12340 ) 00:32:54.751 [2024-11-05 15:58:15.989833] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:54.751 [2024-11-05 15:58:15.989884] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:54.751 [2024-11-05 15:58:15.989903] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:54.751 [2024-11-05 15:58:15.989919] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:54.751 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:32:54.751 [2024-11-05 15:58:15.991859] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:54.751 [2024-11-05 15:58:15.991903] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:54.751 [2024-11-05 15:58:15.991917] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:54.751 [2024-11-05 15:58:15.991933] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:54.751 15:58:16 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:32:54.751 15:58:16 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:32:54.751 [2024-11-05 15:58:16.015479] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:32:54.751 Controller removed: QEMU NVMe Ctrl (12341 ) 00:32:54.751 [2024-11-05 15:58:16.016534] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:54.751 [2024-11-05 15:58:16.016575] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:54.751 [2024-11-05 15:58:16.016596] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:54.751 [2024-11-05 15:58:16.016611] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:54.751 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:32:54.751 [2024-11-05 15:58:16.018290] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:54.751 [2024-11-05 15:58:16.018330] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:54.751 [2024-11-05 15:58:16.018344] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:54.751 [2024-11-05 15:58:16.018359] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:54.751 15:58:16 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:32:54.751 15:58:16 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:32:54.751 15:58:16 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:32:54.751 15:58:16 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:32:54.751 15:58:16 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:32:55.008 15:58:16 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:32:55.008 15:58:16 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:32:55.008 15:58:16 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:32:55.008 15:58:16 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:32:55.008 15:58:16 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:32:55.008 Attaching to 0000:00:10.0 00:32:55.008 Attached to 0000:00:10.0 00:32:55.008 15:58:16 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:32:55.008 15:58:16 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:32:55.008 15:58:16 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:32:55.008 Attaching to 0000:00:11.0 00:32:55.008 Attached to 0000:00:11.0 00:32:55.573 QEMU NVMe Ctrl (12340 ): 2207 I/Os completed (+2207) 00:32:55.573 QEMU NVMe Ctrl (12341 ): 1972 I/Os completed (+1972) 00:32:55.573 00:32:56.945 QEMU NVMe Ctrl (12340 ): 5856 I/Os completed (+3649) 00:32:56.945 QEMU NVMe Ctrl (12341 ): 5700 I/Os completed (+3728) 00:32:56.945 00:32:57.876 QEMU NVMe Ctrl (12340 ): 9505 I/Os completed (+3649) 00:32:57.876 QEMU NVMe Ctrl (12341 ): 9355 I/Os completed (+3655) 00:32:57.876 00:32:58.807 QEMU NVMe Ctrl (12340 ): 12914 I/Os completed (+3409) 00:32:58.807 QEMU NVMe Ctrl (12341 ): 12670 I/Os completed (+3315) 00:32:58.807 00:32:59.738 QEMU NVMe Ctrl (12340 ): 16124 I/Os completed (+3210) 00:32:59.738 QEMU NVMe Ctrl (12341 ): 15733 I/Os completed (+3063) 00:32:59.738 00:33:00.670 QEMU NVMe Ctrl (12340 ): 19600 I/Os completed (+3476) 00:33:00.670 QEMU NVMe Ctrl (12341 ): 19169 I/Os completed (+3436) 00:33:00.670 00:33:01.603 QEMU NVMe Ctrl (12340 ): 23121 I/Os completed (+3521) 00:33:01.603 QEMU NVMe Ctrl (12341 ): 22646 I/Os completed (+3477) 00:33:01.603 00:33:02.975 QEMU NVMe Ctrl (12340 ): 26235 I/Os completed (+3114) 00:33:02.975 QEMU NVMe Ctrl (12341 ): 25691 I/Os completed (+3045) 00:33:02.975 
00:33:03.540 QEMU NVMe Ctrl (12340 ): 29548 I/Os completed (+3313) 00:33:03.540 QEMU NVMe Ctrl (12341 ): 28715 I/Os completed (+3024) 00:33:03.540 00:33:04.911 QEMU NVMe Ctrl (12340 ): 32780 I/Os completed (+3232) 00:33:04.911 QEMU NVMe Ctrl (12341 ): 31894 I/Os completed (+3179) 00:33:04.911 00:33:05.844 QEMU NVMe Ctrl (12340 ): 35885 I/Os completed (+3105) 00:33:05.844 QEMU NVMe Ctrl (12341 ): 34937 I/Os completed (+3043) 00:33:05.844 00:33:06.776 QEMU NVMe Ctrl (12340 ): 39380 I/Os completed (+3495) 00:33:06.776 QEMU NVMe Ctrl (12341 ): 38414 I/Os completed (+3477) 00:33:06.776 00:33:07.033 15:58:28 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:33:07.033 15:58:28 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:33:07.033 15:58:28 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:33:07.033 15:58:28 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:33:07.033 [2024-11-05 15:58:28.280291] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:33:07.033 Controller removed: QEMU NVMe Ctrl (12340 ) 00:33:07.033 [2024-11-05 15:58:28.281274] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:07.033 [2024-11-05 15:58:28.281311] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:07.033 [2024-11-05 15:58:28.281325] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:07.033 [2024-11-05 15:58:28.281339] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:07.033 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:33:07.033 [2024-11-05 15:58:28.282991] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:07.033 [2024-11-05 15:58:28.283029] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:07.033 [2024-11-05 15:58:28.283041] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:07.033 [2024-11-05 15:58:28.283053] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:07.033 15:58:28 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:33:07.033 15:58:28 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:33:07.033 [2024-11-05 15:58:28.302951] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:33:07.033 Controller removed: QEMU NVMe Ctrl (12341 ) 00:33:07.033 [2024-11-05 15:58:28.303831] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:07.033 [2024-11-05 15:58:28.303866] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:07.033 [2024-11-05 15:58:28.303882] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:07.033 [2024-11-05 15:58:28.303895] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:07.033 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:33:07.033 [2024-11-05 15:58:28.305491] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:07.033 [2024-11-05 15:58:28.305525] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:07.033 [2024-11-05 15:58:28.305541] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:07.033 [2024-11-05 15:58:28.305552] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:07.033 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:33:07.033 EAL: Scan for (pci) bus failed. 00:33:07.033 15:58:28 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:33:07.033 15:58:28 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:33:07.033 15:58:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:33:07.033 15:58:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:33:07.033 15:58:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:33:07.291 15:58:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:33:07.291 15:58:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:33:07.291 15:58:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:33:07.291 15:58:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:33:07.291 15:58:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:33:07.291 Attaching to 0000:00:10.0 00:33:07.291 Attached to 0000:00:10.0 00:33:07.291 15:58:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:33:07.291 15:58:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:33:07.291 15:58:28 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:33:07.291 Attaching to 0000:00:11.0 00:33:07.291 Attached to 0000:00:11.0 00:33:07.291 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:33:07.291 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:33:07.291 [2024-11-05 15:58:28.527421] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:33:19.512 15:58:40 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:33:19.512 15:58:40 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:33:19.512 15:58:40 sw_hotplug -- common/autotest_common.sh@717 -- # time=42.81 00:33:19.512 15:58:40 sw_hotplug -- common/autotest_common.sh@718 -- # echo 42.81 00:33:19.512 15:58:40 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:33:19.512 15:58:40 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.81 00:33:19.512 15:58:40 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.81 2 00:33:19.512 remove_attach_helper took 42.81s to complete (handling 2 nvme drive(s)) 15:58:40 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:33:26.121 15:58:46 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 66635 00:33:26.121 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (66635) - No such process 00:33:26.121 15:58:46 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 66635 00:33:26.121 15:58:46 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:33:26.121 15:58:46 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:33:26.121 15:58:46 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:33:26.121 15:58:46 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=67178 00:33:26.121 15:58:46 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:33:26.121 15:58:46 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 67178 00:33:26.121 15:58:46 sw_hotplug -- common/autotest_common.sh@833 -- # '[' -z 67178 ']' 00:33:26.121 15:58:46 sw_hotplug -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:26.121 15:58:46 sw_hotplug -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:26.121 15:58:46 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:26.121 15:58:46 sw_hotplug -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:26.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:26.121 15:58:46 sw_hotplug -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:26.121 15:58:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:33:26.121 [2024-11-05 15:58:46.625298] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 
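From here the same events are replayed in target mode: a long-running spdk_tgt owns the controllers and observes removals through its bdev layer (use_bdev=true below). Stripped of xtrace noise, the traced setup amounts to the following sketch (rpc.py is SPDK's stock RPC client; waitforlisten is reduced to a single polling probe here):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    # block until the UNIX-domain RPC socket at /var/tmp/spdk.sock answers
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_hotplug -e   # enable the hotplug monitor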
00:33:26.121 [2024-11-05 15:58:46.625453] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67178 ] 00:33:26.121 [2024-11-05 15:58:46.787524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:26.121 [2024-11-05 15:58:46.917813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:26.383 15:58:47 sw_hotplug -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:26.383 15:58:47 sw_hotplug -- common/autotest_common.sh@866 -- # return 0 00:33:26.383 15:58:47 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:33:26.383 15:58:47 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.383 15:58:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:33:26.383 15:58:47 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.383 15:58:47 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:33:26.383 15:58:47 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:33:26.383 15:58:47 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:33:26.383 15:58:47 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:33:26.383 15:58:47 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:33:26.383 15:58:47 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:33:26.383 15:58:47 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:33:26.383 15:58:47 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:33:26.383 15:58:47 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:33:26.383 15:58:47 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:33:26.383 15:58:47 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:33:26.383 15:58:47 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:33:26.383 15:58:47 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:33:32.976 15:58:53 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:33:32.976 15:58:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:33:32.976 15:58:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:33:32.976 15:58:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:33:32.976 15:58:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:33:32.976 15:58:53 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:33:32.976 15:58:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:33:32.976 15:58:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:33:32.976 15:58:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:33:32.976 15:58:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:33:32.976 15:58:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:33:32.976 15:58:53 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.976 15:58:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:33:32.976 15:58:53 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.976 15:58:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:33:32.976 15:58:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:33:32.976 [2024-11-05 15:58:53.724450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0, 0] in failed state. 00:33:32.976 [2024-11-05 15:58:53.726156] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:32.976 [2024-11-05 15:58:53.726199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:33:32.976 [2024-11-05 15:58:53.726216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.976 [2024-11-05 15:58:53.726238] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:32.976 [2024-11-05 15:58:53.726247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:33:32.976 [2024-11-05 15:58:53.726258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.976 [2024-11-05 15:58:53.726267] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:32.976 [2024-11-05 15:58:53.726277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:33:32.976 [2024-11-05 15:58:53.726286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.976 [2024-11-05 15:58:53.726300] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:32.977 [2024-11-05 15:58:53.726308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:33:32.977 [2024-11-05 15:58:53.726319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.977 15:58:54 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:33:32.977 15:58:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:33:32.977 15:58:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:33:32.977 15:58:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:33:32.977 15:58:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:33:32.977 15:58:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:33:32.977 15:58:54 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.977 15:58:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:33:32.977 [2024-11-05 15:58:54.224468] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:33:32.977 [2024-11-05 15:58:54.226530] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:32.977 [2024-11-05 15:58:54.226577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:33:32.977 [2024-11-05 15:58:54.226594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.977 [2024-11-05 15:58:54.226617] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:32.977 [2024-11-05 15:58:54.226629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:33:32.977 [2024-11-05 15:58:54.226639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.977 [2024-11-05 15:58:54.226651] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:32.977 [2024-11-05 15:58:54.226660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:33:32.977 [2024-11-05 15:58:54.226671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.977 [2024-11-05 15:58:54.226681] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:32.977 [2024-11-05 15:58:54.226695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:33:32.977 [2024-11-05 15:58:54.226703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.977 15:58:54 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.977 15:58:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:33:32.977 15:58:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:33:33.582 15:58:54 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:33:33.582 15:58:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:33:33.582 15:58:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:33:33.582 15:58:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:33:33.582 15:58:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:33:33.582 15:58:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:33:33.582 15:58:54 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.582 15:58:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:33:33.582 15:58:54 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.582 15:58:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:33:33.582 15:58:54 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:33:33.582 15:58:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:33:33.582 15:58:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:33:33.582 15:58:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:33:33.843 15:58:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:33:33.843 15:58:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:33:33.843 15:58:54 
sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:33:33.843 15:58:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:33:33.843 15:58:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:33:33.843 15:58:55 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:33:33.843 15:58:55 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:33:33.843 15:58:55 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:33:46.059 15:59:07 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:33:46.059 15:59:07 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:33:46.059 15:59:07 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:33:46.059 15:59:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:33:46.059 15:59:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:33:46.059 15:59:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:33:46.059 15:59:07 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.059 15:59:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:33:46.059 15:59:07 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.059 15:59:07 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:33:46.059 15:59:07 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:33:46.059 15:59:07 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:33:46.059 15:59:07 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:33:46.060 [2024-11-05 15:59:07.124658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:33:46.060 [2024-11-05 15:59:07.126156] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:46.060 [2024-11-05 15:59:07.126191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:33:46.060 [2024-11-05 15:59:07.126202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.060 [2024-11-05 15:59:07.126219] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:46.060 [2024-11-05 15:59:07.126226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:33:46.060 [2024-11-05 15:59:07.126235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.060 [2024-11-05 15:59:07.126242] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:46.060 [2024-11-05 15:59:07.126249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:33:46.060 [2024-11-05 15:59:07.126256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.060 [2024-11-05 15:59:07.126264] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:46.060 [2024-11-05 15:59:07.126270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:33:46.060 [2024-11-05 15:59:07.126278] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.060 15:59:07 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:33:46.060 15:59:07 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:33:46.060 15:59:07 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:33:46.060 15:59:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:33:46.060 15:59:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:33:46.060 15:59:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:33:46.060 15:59:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:33:46.060 15:59:07 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.060 15:59:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:33:46.060 15:59:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:33:46.060 15:59:07 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.060 15:59:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:33:46.060 15:59:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:33:46.625 15:59:07 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:33:46.625 15:59:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:33:46.625 15:59:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:33:46.625 15:59:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:33:46.625 15:59:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:33:46.625 15:59:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:33:46.625 15:59:07 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.625 15:59:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:33:46.625 15:59:07 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.625 15:59:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:33:46.625 15:59:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:33:46.625 [2024-11-05 15:59:07.924671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
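bdev_bdfs, traced repeatedly through this half of the run, is how target mode observes a removal: it asks which PCI addresses still back an NVMe bdev, and the caller spins while any remain. A sketch of that polling loop, following the jq pipeline shown in the trace (the script proper checks only the BDFs it just removed; this sketch waits until none remain):

    bdev_bdfs() {
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' | sort -u
    }
    bdfs=($(bdev_bdfs))
    while ((${#bdfs[@]} > 0)); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done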
00:33:46.625 [2024-11-05 15:59:07.925870] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:46.625 [2024-11-05 15:59:07.925903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:33:46.625 [2024-11-05 15:59:07.925917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.625 [2024-11-05 15:59:07.925931] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:46.625 [2024-11-05 15:59:07.925940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:33:46.625 [2024-11-05 15:59:07.925948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.625 [2024-11-05 15:59:07.925957] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:46.625 [2024-11-05 15:59:07.925963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:33:46.625 [2024-11-05 15:59:07.925972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.625 [2024-11-05 15:59:07.925979] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:46.625 [2024-11-05 15:59:07.925987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:33:46.625 [2024-11-05 15:59:07.925993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.883 15:59:08 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:33:46.883 15:59:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:33:46.883 15:59:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:33:46.883 15:59:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:33:46.883 15:59:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:33:46.883 15:59:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:33:46.883 15:59:08 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.883 15:59:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:33:46.883 15:59:08 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.159 15:59:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:33:47.159 15:59:08 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:33:47.159 15:59:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:33:47.159 15:59:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:33:47.159 15:59:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:33:47.159 15:59:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:33:47.159 15:59:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:33:47.159 15:59:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:33:47.159 15:59:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:33:47.159 15:59:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:33:47.159 15:59:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:33:47.159 15:59:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:33:47.159 15:59:08 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:33:59.369 15:59:20 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:33:59.369 15:59:20 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:33:59.369 15:59:20 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:33:59.369 15:59:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:33:59.369 15:59:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:33:59.369 15:59:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:33:59.369 15:59:20 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.369 15:59:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:33:59.369 15:59:20 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.369 15:59:20 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:33:59.369 15:59:20 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:33:59.369 15:59:20 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:33:59.369 15:59:20 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:33:59.369 15:59:20 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:33:59.369 15:59:20 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:33:59.369 15:59:20 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:33:59.369 15:59:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:33:59.369 15:59:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:33:59.369 15:59:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:33:59.369 15:59:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:33:59.369 15:59:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:33:59.369 15:59:20 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.369 15:59:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:33:59.369 15:59:20 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.369 15:59:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:33:59.369 15:59:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:33:59.369 [2024-11-05 15:59:20.624871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:33:59.369 [2024-11-05 15:59:20.626141] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:59.369 [2024-11-05 15:59:20.626175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:33:59.369 [2024-11-05 15:59:20.626186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.369 [2024-11-05 15:59:20.626203] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:59.369 [2024-11-05 15:59:20.626210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:33:59.369 [2024-11-05 15:59:20.626220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.369 [2024-11-05 15:59:20.626227] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:59.369 [2024-11-05 15:59:20.626235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:33:59.369 [2024-11-05 15:59:20.626242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.369 [2024-11-05 15:59:20.626250] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:59.369 [2024-11-05 15:59:20.626257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:33:59.369 [2024-11-05 15:59:20.626265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.936 15:59:21 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:33:59.936 15:59:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:33:59.936 15:59:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:33:59.936 15:59:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:33:59.936 15:59:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:33:59.936 15:59:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:33:59.936 15:59:21 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.936 15:59:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:33:59.936 [2024-11-05 15:59:21.125227] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:33:59.936 [2024-11-05 15:59:21.126455] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:59.936 [2024-11-05 15:59:21.126488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:33:59.936 [2024-11-05 15:59:21.126500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.936 [2024-11-05 15:59:21.126514] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:59.936 [2024-11-05 15:59:21.126523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:33:59.936 [2024-11-05 15:59:21.126531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.936 [2024-11-05 15:59:21.126539] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:59.936 [2024-11-05 15:59:21.126546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:33:59.936 [2024-11-05 15:59:21.126555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.936 [2024-11-05 15:59:21.126562] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:59.936 [2024-11-05 15:59:21.126570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:33:59.936 [2024-11-05 15:59:21.126576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.936 15:59:21 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.936 15:59:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:33:59.936 15:59:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:34:00.503 15:59:21 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:34:00.503 15:59:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:34:00.503 15:59:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:34:00.503 15:59:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:34:00.503 15:59:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:34:00.503 15:59:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:34:00.503 15:59:21 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:00.503 15:59:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:00.503 15:59:21 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:00.503 15:59:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:34:00.503 15:59:21 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:34:00.503 15:59:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:34:00.503 15:59:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:34:00.503 15:59:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:34:00.503 15:59:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:34:00.503 15:59:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:34:00.503 15:59:21 
sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:34:00.503 15:59:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:34:00.503 15:59:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:34:00.761 15:59:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:34:00.761 15:59:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:34:00.761 15:59:21 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:34:12.974 15:59:33 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:34:12.974 15:59:33 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:34:12.974 15:59:33 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:34:12.974 15:59:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:34:12.974 15:59:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:34:12.974 15:59:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:34:12.974 15:59:33 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.974 15:59:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:12.974 15:59:33 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.974 15:59:33 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:34:12.974 15:59:33 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:34:12.974 15:59:33 sw_hotplug -- common/autotest_common.sh@717 -- # time=46.32 00:34:12.974 15:59:33 sw_hotplug -- common/autotest_common.sh@718 -- # echo 46.32 00:34:12.974 15:59:33 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:34:12.974 15:59:33 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=46.32 00:34:12.974 15:59:33 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 46.32 2 00:34:12.974 remove_attach_helper took 46.32s to complete (handling 2 nvme drive(s)) 15:59:33 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:34:12.974 15:59:33 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.974 15:59:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:12.974 15:59:33 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.974 15:59:33 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:34:12.974 15:59:33 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.974 15:59:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:12.974 15:59:33 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.974 15:59:33 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:34:12.974 15:59:33 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:34:12.974 15:59:33 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:34:12.974 15:59:33 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:34:12.974 15:59:33 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:34:12.974 15:59:33 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:34:12.974 15:59:33 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:34:12.974 15:59:33 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:34:12.974 15:59:33 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:34:12.974 15:59:33 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # 
local hotplug_wait=6 00:34:12.974 15:59:33 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:34:12.974 15:59:33 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:34:12.974 15:59:33 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:34:19.528 15:59:39 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:34:19.528 15:59:39 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:34:19.528 15:59:39 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:34:19.528 15:59:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:34:19.528 15:59:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:34:19.528 15:59:40 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:34:19.528 15:59:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:34:19.528 15:59:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:34:19.528 15:59:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:34:19.528 15:59:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:34:19.528 15:59:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:34:19.528 15:59:40 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.528 15:59:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:19.528 15:59:40 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.528 15:59:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:34:19.528 15:59:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:34:19.528 [2024-11-05 15:59:40.073280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:34:19.528 [2024-11-05 15:59:40.074271] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:19.528 [2024-11-05 15:59:40.074307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:34:19.528 [2024-11-05 15:59:40.074318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.528 [2024-11-05 15:59:40.074337] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:19.528 [2024-11-05 15:59:40.074345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:34:19.528 [2024-11-05 15:59:40.074355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.528 [2024-11-05 15:59:40.074363] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:19.528 [2024-11-05 15:59:40.074371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:34:19.528 [2024-11-05 15:59:40.074385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.528 [2024-11-05 15:59:40.074394] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:19.528 [2024-11-05 15:59:40.074401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:34:19.528 [2024-11-05 15:59:40.074412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.528 [2024-11-05 15:59:40.473290] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:34:19.528 [2024-11-05 15:59:40.474236] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:19.528 [2024-11-05 15:59:40.474268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:34:19.528 [2024-11-05 15:59:40.474281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.528 [2024-11-05 15:59:40.474295] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:19.528 [2024-11-05 15:59:40.474304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:34:19.528 [2024-11-05 15:59:40.474311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.528 [2024-11-05 15:59:40.474320] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:19.528 [2024-11-05 15:59:40.474326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:34:19.528 [2024-11-05 15:59:40.474334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.528 [2024-11-05 15:59:40.474341] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:19.528 [2024-11-05 15:59:40.474349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:34:19.528 [2024-11-05 15:59:40.474356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.528 15:59:40 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:34:19.528 15:59:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:34:19.528 15:59:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:34:19.528 15:59:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:34:19.528 15:59:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:34:19.528 15:59:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:34:19.528 15:59:40 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.528 15:59:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:19.528 15:59:40 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.528 15:59:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:34:19.528 15:59:40 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:34:19.528 15:59:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:34:19.528 15:59:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:34:19.528 15:59:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:34:19.528 15:59:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:34:19.528 15:59:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:34:19.528 15:59:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # 
for dev in "${nvmes[@]}" 00:34:19.528 15:59:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:34:19.528 15:59:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:34:19.528 15:59:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:34:19.528 15:59:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:34:19.528 15:59:40 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:34:31.761 15:59:52 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:34:31.761 15:59:52 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:34:31.761 15:59:52 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:34:31.761 15:59:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:34:31.761 15:59:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:34:31.761 15:59:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:34:31.761 15:59:52 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.761 15:59:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:31.761 15:59:52 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.761 15:59:52 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:34:31.761 15:59:52 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:34:31.761 15:59:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:34:31.761 15:59:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:34:31.761 15:59:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:34:31.761 15:59:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:34:31.761 15:59:52 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:34:31.761 15:59:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:34:31.761 15:59:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:34:31.761 15:59:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:34:31.761 15:59:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:34:31.761 15:59:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:34:31.761 15:59:52 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.761 15:59:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:31.761 15:59:52 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.761 15:59:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:34:31.761 15:59:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:34:31.761 [2024-11-05 15:59:52.973496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:34:31.761 [2024-11-05 15:59:52.974545] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:31.761 [2024-11-05 15:59:52.974581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:34:31.761 [2024-11-05 15:59:52.974592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:31.761 [2024-11-05 15:59:52.974609] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:31.761 [2024-11-05 15:59:52.974617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:34:31.761 [2024-11-05 15:59:52.974625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:31.762 [2024-11-05 15:59:52.974633] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:31.762 [2024-11-05 15:59:52.974641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:34:31.762 [2024-11-05 15:59:52.974648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:31.762 [2024-11-05 15:59:52.974657] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:31.762 [2024-11-05 15:59:52.974663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:34:31.762 [2024-11-05 15:59:52.974671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.326 15:59:53 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:34:32.326 15:59:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:34:32.326 15:59:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:34:32.326 15:59:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:34:32.326 15:59:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:34:32.326 15:59:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:34:32.326 15:59:53 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.326 15:59:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:32.326 [2024-11-05 15:59:53.473491] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
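The 46.32 s figure printed after the first helper run (autotest_common.sh@717-720 and sw_hotplug.sh@21-22 above) comes from bash's time keyword with TIMEFORMAT=%2R, i.e. elapsed real time with two decimal places. A simplified stand-in, not the exact autotest_common.sh implementation (the real helper also preserves the wrapped command's exit status and output streams via exec):

    timing_cmd() {
        local TIMEFORMAT=%2R elapsed
        # the time keyword prints its report on stderr; capture only that report,
        # discarding the wrapped command's own output (a simplification)
        elapsed=$( { time "$@" >/dev/null 2>&1; } 2>&1 )
        echo "$elapsed"
    }
    helper_time=$(timing_cmd remove_attach_helper 3 6 true)   # -> e.g. 46.32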
00:34:32.326 [2024-11-05 15:59:53.474442] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:32.326 [2024-11-05 15:59:53.474473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:34:32.326 [2024-11-05 15:59:53.474484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.326 [2024-11-05 15:59:53.474498] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:32.326 [2024-11-05 15:59:53.474510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:34:32.326 [2024-11-05 15:59:53.474516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.326 [2024-11-05 15:59:53.474525] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:32.327 [2024-11-05 15:59:53.474532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:34:32.327 [2024-11-05 15:59:53.474540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.327 [2024-11-05 15:59:53.474548] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:32.327 [2024-11-05 15:59:53.474556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:34:32.327 [2024-11-05 15:59:53.474562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.327 15:59:53 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.327 15:59:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:34:32.327 15:59:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:34:32.892 15:59:53 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:34:32.892 15:59:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:34:32.892 15:59:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:34:32.892 15:59:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:34:32.892 15:59:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:34:32.892 15:59:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:34:32.892 15:59:53 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.892 15:59:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:32.892 15:59:54 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.892 15:59:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:34:32.892 15:59:54 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:34:32.892 15:59:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:34:32.892 15:59:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:34:32.892 15:59:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:34:32.892 15:59:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:34:32.892 15:59:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:34:32.892 15:59:54 
sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:34:32.892 15:59:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:34:32.892 15:59:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:34:33.150 15:59:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:34:33.150 15:59:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:34:33.150 15:59:54 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:34:45.360 16:00:06 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:34:45.360 16:00:06 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:34:45.360 16:00:06 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:34:45.360 16:00:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:34:45.360 16:00:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:34:45.360 16:00:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:34:45.360 16:00:06 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:45.360 16:00:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:45.360 16:00:06 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:45.360 16:00:06 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:34:45.360 16:00:06 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:34:45.360 16:00:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:34:45.360 16:00:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:34:45.360 16:00:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:34:45.360 16:00:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:34:45.360 16:00:06 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:34:45.360 16:00:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:34:45.360 16:00:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:34:45.360 16:00:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:34:45.360 16:00:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:34:45.360 16:00:06 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:45.360 16:00:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:45.360 16:00:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:34:45.360 16:00:06 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:45.360 [2024-11-05 16:00:06.373732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
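Taken together, the recurring sw_hotplug.sh@27-71 tags outline the whole helper. A skeleton reassembled from those trace tags; the remove target and the 2x multiplier are inferences, since redirections are not traced:

    remove_attach_helper() {
        local hotplug_events=$1   # sh@27: here 3 unplug/replug cycles
        local hotplug_wait=$2     # sh@28: here 6 s
        local use_bdev=$3         # sh@29: verify via bdev_get_bdevs, not block devices
        local dev bdfs            # sh@30
        sleep "$hotplug_wait"     # sh@36
        while ((hotplug_events--)); do                       # sh@38
            for dev in "${nvmes[@]}"; do                     # sh@39
                echo 1 > "/sys/bus/pci/devices/$dev/remove"  # sh@40 (assumed target)
            done
            # sh@43-51: poll bdev_bdfs until both BDFs disappear
            # sh@56-62: rescan and rebind both devices
            sleep $((hotplug_wait * 2))                      # sh@66: the traced 'sleep 12' (assumed 2x wait)
            # sh@68-71: assert bdev_bdfs reports 0000:00:10.0 0000:00:11.0 again
        done
    }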
00:34:45.360 [2024-11-05 16:00:06.374766] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:45.360 [2024-11-05 16:00:06.374801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:34:45.360 [2024-11-05 16:00:06.374812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:45.360 [2024-11-05 16:00:06.374829] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:45.360 [2024-11-05 16:00:06.374836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:34:45.360 [2024-11-05 16:00:06.374844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:45.360 [2024-11-05 16:00:06.374852] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:45.360 [2024-11-05 16:00:06.374864] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:34:45.360 [2024-11-05 16:00:06.374871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:45.360 [2024-11-05 16:00:06.374880] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:45.360 [2024-11-05 16:00:06.374887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:34:45.360 [2024-11-05 16:00:06.374895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:45.360 16:00:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:34:45.360 16:00:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:34:45.618 [2024-11-05 16:00:06.773746] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:34:45.618 [2024-11-05 16:00:06.774756] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:45.618 [2024-11-05 16:00:06.774782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:34:45.618 [2024-11-05 16:00:06.774794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:45.618 [2024-11-05 16:00:06.774809] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:45.618 [2024-11-05 16:00:06.774818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:34:45.618 [2024-11-05 16:00:06.774825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:45.618 [2024-11-05 16:00:06.774834] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:45.618 [2024-11-05 16:00:06.774841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:34:45.618 [2024-11-05 16:00:06.774849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:45.618 [2024-11-05 16:00:06.774856] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:45.618 [2024-11-05 16:00:06.774866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:34:45.618 [2024-11-05 16:00:06.774873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:45.618 16:00:06 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:34:45.618 16:00:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:34:45.618 16:00:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:34:45.618 16:00:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:34:45.618 16:00:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:34:45.618 16:00:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:34:45.618 16:00:06 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:45.618 16:00:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:45.618 16:00:06 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:45.618 16:00:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:34:45.618 16:00:06 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:34:45.877 16:00:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:34:45.877 16:00:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:34:45.877 16:00:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:34:45.877 16:00:07 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:34:45.877 16:00:07 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:34:45.877 16:00:07 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:34:45.877 16:00:07 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:34:45.877 16:00:07 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:34:45.877 16:00:07 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:34:45.877 16:00:07 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:34:45.877 16:00:07 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:34:58.068 16:00:19 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:34:58.068 16:00:19 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:34:58.068 16:00:19 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:34:58.068 16:00:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:34:58.068 16:00:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:34:58.068 16:00:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:34:58.068 16:00:19 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:58.068 16:00:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:58.068 16:00:19 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:58.068 16:00:19 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:34:58.068 16:00:19 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:34:58.068 16:00:19 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.21 00:34:58.068 16:00:19 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.21 00:34:58.068 16:00:19 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:34:58.068 16:00:19 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.21 00:34:58.068 16:00:19 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.21 2 00:34:58.068 remove_attach_helper took 45.21s to complete (handling 2 nvme drive(s)) 16:00:19 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:34:58.068 16:00:19 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 67178 00:34:58.068 16:00:19 sw_hotplug -- common/autotest_common.sh@952 -- # '[' -z 67178 ']' 00:34:58.068 16:00:19 sw_hotplug -- common/autotest_common.sh@956 -- # kill -0 67178 00:34:58.068 16:00:19 sw_hotplug -- common/autotest_common.sh@957 -- # uname 00:34:58.068 16:00:19 sw_hotplug -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:58.068 16:00:19 sw_hotplug -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67178 00:34:58.068 16:00:19 sw_hotplug -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:34:58.068 16:00:19 sw_hotplug -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:34:58.068 killing process with pid 67178 00:34:58.068 16:00:19 sw_hotplug -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67178' 00:34:58.068 16:00:19 sw_hotplug -- common/autotest_common.sh@971 -- # kill 67178 00:34:58.068 16:00:19 sw_hotplug -- common/autotest_common.sh@976 -- # wait 67178 00:34:59.439 16:00:20 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:34:59.439 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:59.699 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:34:59.699 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:34:59.957 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:34:59.957 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:34:59.957 ************************************ 00:34:59.957 END TEST sw_hotplug 00:34:59.957 ************************************ 
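The autotest_common.sh@952-976 trace above spells out the killprocess helper used to stop the SPDK app (pid 67178, comm reactor_0). A reconstruction consistent with those traced checks; the return codes and the intent of the sudo guard are inferred, not verbatim:

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1                       # @952: '[' -z 67178 ']'
        kill -0 "$pid" 2>/dev/null || return 0          # @956: nothing to do if already gone
        if [[ $(uname) == Linux ]]; then                # @957
            # @958-962: refuse to signal a bare sudo wrapper by mistake (assumed intent)
            [[ $(ps --no-headers -o comm= "$pid") != sudo ]] || return 1
        fi
        echo "killing process with pid $pid"            # @970
        kill "$pid"                                     # @971
        wait "$pid" 2>/dev/null || true                 # @976
    }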
00:34:59.957 00:34:59.957 real 2m30.673s 00:34:59.957 user 1m53.084s 00:34:59.957 sys 0m16.563s 00:34:59.957 16:00:21 sw_hotplug -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:59.957 16:00:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:59.957 16:00:21 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:34:59.957 16:00:21 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:34:59.957 16:00:21 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:34:59.957 16:00:21 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:59.957 16:00:21 -- common/autotest_common.sh@10 -- # set +x 00:34:59.957 ************************************ 00:34:59.957 START TEST nvme_xnvme 00:34:59.957 ************************************ 00:34:59.957 16:00:21 nvme_xnvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:34:59.957 * Looking for test storage... 00:34:59.957 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:34:59.957 16:00:21 nvme_xnvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:59.957 16:00:21 nvme_xnvme -- common/autotest_common.sh@1691 -- # lcov --version 00:34:59.957 16:00:21 nvme_xnvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:00.215 16:00:21 nvme_xnvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:00.216 16:00:21 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:00.216 16:00:21 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:00.216 16:00:21 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:00.216 16:00:21 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:35:00.216 16:00:21 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:35:00.216 16:00:21 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:35:00.216 16:00:21 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:35:00.216 16:00:21 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:35:00.216 16:00:21 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:35:00.216 16:00:21 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:35:00.216 16:00:21 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:00.216 16:00:21 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:35:00.216 16:00:21 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:35:00.216 16:00:21 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:00.216 16:00:21 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:00.216 16:00:21 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:35:00.216 16:00:21 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:35:00.216 16:00:21 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:00.216 16:00:21 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:35:00.216 16:00:21 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:35:00.216 16:00:21 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:35:00.216 16:00:21 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:35:00.216 16:00:21 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:00.216 16:00:21 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:35:00.216 16:00:21 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:35:00.216 16:00:21 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:00.216 16:00:21 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:00.216 16:00:21 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:35:00.216 16:00:21 nvme_xnvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:00.216 16:00:21 nvme_xnvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:00.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:00.216 --rc genhtml_branch_coverage=1 00:35:00.216 --rc genhtml_function_coverage=1 00:35:00.216 --rc genhtml_legend=1 00:35:00.216 --rc geninfo_all_blocks=1 00:35:00.216 --rc geninfo_unexecuted_blocks=1 00:35:00.216 00:35:00.216 ' 00:35:00.216 16:00:21 nvme_xnvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:00.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:00.216 --rc genhtml_branch_coverage=1 00:35:00.216 --rc genhtml_function_coverage=1 00:35:00.216 --rc genhtml_legend=1 00:35:00.216 --rc geninfo_all_blocks=1 00:35:00.216 --rc geninfo_unexecuted_blocks=1 00:35:00.216 00:35:00.216 ' 00:35:00.216 16:00:21 nvme_xnvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:00.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:00.216 --rc genhtml_branch_coverage=1 00:35:00.216 --rc genhtml_function_coverage=1 00:35:00.216 --rc genhtml_legend=1 00:35:00.216 --rc geninfo_all_blocks=1 00:35:00.216 --rc geninfo_unexecuted_blocks=1 00:35:00.216 00:35:00.216 ' 00:35:00.216 16:00:21 nvme_xnvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:00.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:00.216 --rc genhtml_branch_coverage=1 00:35:00.216 --rc genhtml_function_coverage=1 00:35:00.216 --rc genhtml_legend=1 00:35:00.216 --rc geninfo_all_blocks=1 00:35:00.216 --rc geninfo_unexecuted_blocks=1 00:35:00.216 00:35:00.216 ' 00:35:00.216 16:00:21 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:35:00.216 16:00:21 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:35:00.216 16:00:21 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:00.216 16:00:21 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:00.216 16:00:21 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:00.216 16:00:21 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.216 16:00:21 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.216 16:00:21 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.216 16:00:21 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:35:00.216 16:00:21 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.216 16:00:21 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:35:00.216 16:00:21 nvme_xnvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:35:00.216 16:00:21 nvme_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:00.216 16:00:21 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:35:00.216 ************************************ 00:35:00.216 START TEST xnvme_to_malloc_dd_copy 00:35:00.216 ************************************ 00:35:00.216 16:00:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1127 -- # malloc_to_xnvme_copy 00:35:00.216 16:00:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:35:00.216 16:00:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:35:00.216 16:00:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:35:00.216 16:00:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:35:00.216 16:00:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:35:00.216 16:00:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:35:00.216 16:00:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:35:00.216 16:00:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:35:00.216 16:00:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:35:00.216 16:00:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:35:00.216 16:00:21 
nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:35:00.216 16:00:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:35:00.216 16:00:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:35:00.216 16:00:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:35:00.216 16:00:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:35:00.216 16:00:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:35:00.216 16:00:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:35:00.216 16:00:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:35:00.216 16:00:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:35:00.216 16:00:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:35:00.216 16:00:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:35:00.216 16:00:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:35:00.216 16:00:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:35:00.216 16:00:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:35:00.216 { 00:35:00.216 "subsystems": [ 00:35:00.216 { 00:35:00.216 "subsystem": "bdev", 00:35:00.216 "config": [ 00:35:00.216 { 00:35:00.216 "params": { 00:35:00.216 "block_size": 512, 00:35:00.216 "num_blocks": 2097152, 00:35:00.216 "name": "malloc0" 00:35:00.216 }, 00:35:00.216 "method": "bdev_malloc_create" 00:35:00.216 }, 00:35:00.216 { 00:35:00.216 "params": { 00:35:00.216 "io_mechanism": "libaio", 00:35:00.216 "filename": "/dev/nullb0", 00:35:00.216 "name": "null0" 00:35:00.216 }, 00:35:00.216 "method": "bdev_xnvme_create" 00:35:00.216 }, 00:35:00.216 { 00:35:00.216 "method": "bdev_wait_for_examine" 00:35:00.216 } 00:35:00.216 ] 00:35:00.216 } 00:35:00.216 ] 00:35:00.216 } 00:35:00.216 [2024-11-05 16:00:21.458549] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 
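For the copy test, the null0 bdev sits on the kernel null_blk device created at dd/common.sh@186-187 above, and malloc0 is sized to match it exactly: modprobe null_blk gb=1 yields a 1 GiB /dev/nullb0, and 1 GiB / 512 B = 2,097,152 blocks, which is precisely the num_blocks=2097152, block_size=512 pair in the JSON config. A sketch of the two helpers; the reload guard's exact condition is an assumption:

    init_null_blk() {
        # if the module is already loaded, reload it so the size args take effect (assumed)
        [[ -e /sys/module/null_blk ]] && modprobe -r null_blk
        modprobe null_blk "$@"    # dd/common.sh@186; here: gb=1
    }
    remove_null_blk() {
        modprobe -r null_blk      # dd/common.sh@191
    }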
00:35:00.216 [2024-11-05 16:00:21.458659] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68581 ] 00:35:00.474 [2024-11-05 16:00:21.615672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:00.474 [2024-11-05 16:00:21.690768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:02.394  [2024-11-05T16:00:24.688Z] Copying: 300/1024 [MB] (300 MBps) [2024-11-05T16:00:25.621Z] Copying: 601/1024 [MB] (301 MBps) [2024-11-05T16:00:25.908Z] Copying: 902/1024 [MB] (300 MBps) [2024-11-05T16:00:27.806Z] Copying: 1024/1024 [MB] (average 300 MBps) 00:35:06.444 00:35:06.445 16:00:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:35:06.445 16:00:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:35:06.445 16:00:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:35:06.445 16:00:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:35:06.445 { 00:35:06.445 "subsystems": [ 00:35:06.445 { 00:35:06.445 "subsystem": "bdev", 00:35:06.445 "config": [ 00:35:06.445 { 00:35:06.445 "params": { 00:35:06.445 "block_size": 512, 00:35:06.445 "num_blocks": 2097152, 00:35:06.445 "name": "malloc0" 00:35:06.445 }, 00:35:06.445 "method": "bdev_malloc_create" 00:35:06.445 }, 00:35:06.445 { 00:35:06.445 "params": { 00:35:06.445 "io_mechanism": "libaio", 00:35:06.445 "filename": "/dev/nullb0", 00:35:06.445 "name": "null0" 00:35:06.445 }, 00:35:06.445 "method": "bdev_xnvme_create" 00:35:06.445 }, 00:35:06.445 { 00:35:06.445 "method": "bdev_wait_for_examine" 00:35:06.445 } 00:35:06.445 ] 00:35:06.445 } 00:35:06.445 ] 00:35:06.445 } 00:35:06.702 [2024-11-05 16:00:27.823564] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 
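The --json /dev/fd/62 argument traced above is not a real file: gen_conf prints the JSON config shown in the log and spdk_dd reads it through process substitution. A stand-alone equivalent of the first copy (malloc0 -> null0, libaio engine), with the same config inlined:

    conf='{"subsystems":[{"subsystem":"bdev","config":[
      {"method":"bdev_malloc_create","params":{"name":"malloc0","num_blocks":2097152,"block_size":512}},
      {"method":"bdev_xnvme_create","params":{"name":"null0","filename":"/dev/nullb0","io_mechanism":"libaio"}},
      {"method":"bdev_wait_for_examine"}]}]}'
    # <(...) is what the shell exposes as /dev/fd/62 in the traced command line
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json <(printf '%s' "$conf")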
00:35:06.702 [2024-11-05 16:00:27.823680] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68657 ] 00:35:06.702 [2024-11-05 16:00:27.980460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:06.702 [2024-11-05 16:00:28.056163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:08.619  [2024-11-05T16:00:30.914Z] Copying: 303/1024 [MB] (303 MBps) [2024-11-05T16:00:31.847Z] Copying: 607/1024 [MB] (304 MBps) [2024-11-05T16:00:32.413Z] Copying: 912/1024 [MB] (305 MBps) [2024-11-05T16:00:34.313Z] Copying: 1024/1024 [MB] (average 304 MBps) 00:35:12.951 00:35:12.951 16:00:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:35:12.951 16:00:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:35:12.951 16:00:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:35:12.951 16:00:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:35:12.951 16:00:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:35:12.951 16:00:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:35:12.951 { 00:35:12.951 "subsystems": [ 00:35:12.951 { 00:35:12.951 "subsystem": "bdev", 00:35:12.951 "config": [ 00:35:12.951 { 00:35:12.951 "params": { 00:35:12.951 "block_size": 512, 00:35:12.951 "num_blocks": 2097152, 00:35:12.951 "name": "malloc0" 00:35:12.951 }, 00:35:12.951 "method": "bdev_malloc_create" 00:35:12.951 }, 00:35:12.951 { 00:35:12.951 "params": { 00:35:12.951 "io_mechanism": "io_uring", 00:35:12.951 "filename": "/dev/nullb0", 00:35:12.951 "name": "null0" 00:35:12.951 }, 00:35:12.951 "method": "bdev_xnvme_create" 00:35:12.951 }, 00:35:12.951 { 00:35:12.951 "method": "bdev_wait_for_examine" 00:35:12.951 } 00:35:12.951 ] 00:35:12.951 } 00:35:12.951 ] 00:35:12.951 } 00:35:12.951 [2024-11-05 16:00:34.139122] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 
00:35:12.951 [2024-11-05 16:00:34.139234] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68734 ] 00:35:12.951 [2024-11-05 16:00:34.295703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:13.209 [2024-11-05 16:00:34.373154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:15.107  [2024-11-05T16:00:37.403Z] Copying: 310/1024 [MB] (310 MBps) [2024-11-05T16:00:38.337Z] Copying: 622/1024 [MB] (311 MBps) [2024-11-05T16:00:38.594Z] Copying: 933/1024 [MB] (310 MBps) [2024-11-05T16:00:40.493Z] Copying: 1024/1024 [MB] (average 311 MBps) 00:35:19.131 00:35:19.131 16:00:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:35:19.131 16:00:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:35:19.131 16:00:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:35:19.131 16:00:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:35:19.131 { 00:35:19.131 "subsystems": [ 00:35:19.131 { 00:35:19.131 "subsystem": "bdev", 00:35:19.131 "config": [ 00:35:19.131 { 00:35:19.131 "params": { 00:35:19.131 "block_size": 512, 00:35:19.131 "num_blocks": 2097152, 00:35:19.131 "name": "malloc0" 00:35:19.131 }, 00:35:19.131 "method": "bdev_malloc_create" 00:35:19.131 }, 00:35:19.131 { 00:35:19.131 "params": { 00:35:19.131 "io_mechanism": "io_uring", 00:35:19.131 "filename": "/dev/nullb0", 00:35:19.131 "name": "null0" 00:35:19.131 }, 00:35:19.131 "method": "bdev_xnvme_create" 00:35:19.131 }, 00:35:19.131 { 00:35:19.131 "method": "bdev_wait_for_examine" 00:35:19.131 } 00:35:19.131 ] 00:35:19.131 } 00:35:19.131 ] 00:35:19.131 } 00:35:19.131 [2024-11-05 16:00:40.382465] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 
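The four spdk_dd runs above follow one pattern (xnvme.sh@38-47): a loop over the requested I/O engines, copying the 1 GiB device in each direction per engine. The throughput lines show roughly 300-305 MBps with libaio and 310-315 MBps with io_uring on this VM. A condensed sketch, where spdk_dd stands for the full build path and gen_conf regenerates the JSON from the method_bdev_* arrays each time:

    for io in "${xnvme_io[@]}"; do                           # xnvme.sh@38: libaio, then io_uring
        method_bdev_xnvme_create_0["io_mechanism"]=$io       # xnvme.sh@39
        spdk_dd --ib=malloc0 --ob=null0 --json <(gen_conf)   # xnvme.sh@42: malloc -> null
        spdk_dd --ib=null0 --ob=malloc0 --json <(gen_conf)   # xnvme.sh@47: null -> malloc
    done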
00:35:19.131 [2024-11-05 16:00:40.382984] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68811 ] 00:35:19.390 [2024-11-05 16:00:40.539861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:19.390 [2024-11-05 16:00:40.616050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:21.321  [2024-11-05T16:00:43.617Z] Copying: 315/1024 [MB] (315 MBps) [2024-11-05T16:00:44.550Z] Copying: 631/1024 [MB] (316 MBps) [2024-11-05T16:00:44.808Z] Copying: 947/1024 [MB] (316 MBps) [2024-11-05T16:00:46.711Z] Copying: 1024/1024 [MB] (average 315 MBps) 00:35:25.349 00:35:25.349 16:00:46 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:35:25.349 16:00:46 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:35:25.349 00:35:25.349 real 0m25.159s 00:35:25.349 user 0m22.282s 00:35:25.349 sys 0m2.376s 00:35:25.349 16:00:46 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:25.349 16:00:46 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:35:25.349 ************************************ 00:35:25.349 END TEST xnvme_to_malloc_dd_copy 00:35:25.349 ************************************ 00:35:25.349 16:00:46 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:35:25.349 16:00:46 nvme_xnvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:35:25.349 16:00:46 nvme_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:25.350 16:00:46 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:35:25.350 ************************************ 00:35:25.350 START TEST xnvme_bdevperf 00:35:25.350 ************************************ 00:35:25.350 16:00:46 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1127 -- # xnvme_bdevperf 00:35:25.350 16:00:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:35:25.350 16:00:46 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:35:25.350 16:00:46 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:35:25.350 16:00:46 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:35:25.350 16:00:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:35:25.350 16:00:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:35:25.350 16:00:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:35:25.350 16:00:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:35:25.350 16:00:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:35:25.350 16:00:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:35:25.350 16:00:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:35:25.350 16:00:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:35:25.350 16:00:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:35:25.350 16:00:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:35:25.350 16:00:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:35:25.350 
16:00:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:35:25.350 16:00:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:35:25.350 16:00:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:35:25.350 16:00:46 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:35:25.350 16:00:46 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:25.350 { 00:35:25.350 "subsystems": [ 00:35:25.350 { 00:35:25.350 "subsystem": "bdev", 00:35:25.350 "config": [ 00:35:25.350 { 00:35:25.350 "params": { 00:35:25.350 "io_mechanism": "libaio", 00:35:25.350 "filename": "/dev/nullb0", 00:35:25.350 "name": "null0" 00:35:25.350 }, 00:35:25.350 "method": "bdev_xnvme_create" 00:35:25.350 }, 00:35:25.350 { 00:35:25.350 "method": "bdev_wait_for_examine" 00:35:25.350 } 00:35:25.350 ] 00:35:25.350 } 00:35:25.350 ] 00:35:25.350 } 00:35:25.350 [2024-11-05 16:00:46.659059] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 00:35:25.350 [2024-11-05 16:00:46.659168] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68910 ] 00:35:25.624 [2024-11-05 16:00:46.813991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:25.624 [2024-11-05 16:00:46.890539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:25.881 Running I/O for 5 seconds... 00:35:27.748 200960.00 IOPS, 785.00 MiB/s [2024-11-05T16:00:50.484Z] 201056.00 IOPS, 785.38 MiB/s [2024-11-05T16:00:51.417Z] 200661.33 IOPS, 783.83 MiB/s [2024-11-05T16:00:52.350Z] 200096.00 IOPS, 781.62 MiB/s 00:35:30.988 Latency(us) 00:35:30.988 [2024-11-05T16:00:52.350Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:30.988 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:35:30.988 null0 : 5.00 200351.85 782.62 0.00 0.00 317.24 113.43 1562.78 00:35:30.988 [2024-11-05T16:00:52.350Z] =================================================================================================================== 00:35:30.988 [2024-11-05T16:00:52.350Z] Total : 200351.85 782.62 0.00 0.00 317.24 113.43 1562.78 00:35:31.554 16:00:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:35:31.554 16:00:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:35:31.554 16:00:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:35:31.554 16:00:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:35:31.554 16:00:52 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:35:31.554 16:00:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:31.554 { 00:35:31.554 "subsystems": [ 00:35:31.554 { 00:35:31.554 "subsystem": "bdev", 00:35:31.554 "config": [ 00:35:31.554 { 00:35:31.554 "params": { 00:35:31.554 "io_mechanism": "io_uring", 00:35:31.554 "filename": "/dev/nullb0", 00:35:31.554 "name": "null0" 00:35:31.554 }, 00:35:31.554 "method": "bdev_xnvme_create" 00:35:31.554 }, 00:35:31.554 { 00:35:31.554 "method": 
"bdev_wait_for_examine" 00:35:31.554 } 00:35:31.554 ] 00:35:31.554 } 00:35:31.554 ] 00:35:31.554 } 00:35:31.554 [2024-11-05 16:00:52.732337] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 00:35:31.554 [2024-11-05 16:00:52.732869] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68979 ] 00:35:31.554 [2024-11-05 16:00:52.888223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:31.812 [2024-11-05 16:00:52.967869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:31.812 Running I/O for 5 seconds... 00:35:34.118 231104.00 IOPS, 902.75 MiB/s [2024-11-05T16:00:56.458Z] 230976.00 IOPS, 902.25 MiB/s [2024-11-05T16:00:57.391Z] 231061.33 IOPS, 902.58 MiB/s [2024-11-05T16:00:58.327Z] 231136.00 IOPS, 902.88 MiB/s [2024-11-05T16:00:58.327Z] 231219.20 IOPS, 903.20 MiB/s 00:35:36.965 Latency(us) 00:35:36.965 [2024-11-05T16:00:58.327Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:36.965 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:35:36.965 null0 : 5.00 231144.81 902.91 0.00 0.00 274.57 234.73 1562.78 00:35:36.965 [2024-11-05T16:00:58.327Z] =================================================================================================================== 00:35:36.965 [2024-11-05T16:00:58.327Z] Total : 231144.81 902.91 0.00 0.00 274.57 234.73 1562.78 00:35:37.531 16:00:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:35:37.531 16:00:58 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:35:37.531 00:35:37.531 real 0m12.151s 00:35:37.531 user 0m9.811s 00:35:37.531 sys 0m2.116s 00:35:37.531 16:00:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:37.531 16:00:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:37.531 ************************************ 00:35:37.531 END TEST xnvme_bdevperf 00:35:37.531 ************************************ 00:35:37.531 00:35:37.531 real 0m37.532s 00:35:37.531 user 0m32.204s 00:35:37.531 sys 0m4.610s 00:35:37.531 16:00:58 nvme_xnvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:37.531 16:00:58 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:35:37.531 ************************************ 00:35:37.531 END TEST nvme_xnvme 00:35:37.531 ************************************ 00:35:37.531 16:00:58 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:35:37.531 16:00:58 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:35:37.531 16:00:58 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:37.531 16:00:58 -- common/autotest_common.sh@10 -- # set +x 00:35:37.531 ************************************ 00:35:37.531 START TEST blockdev_xnvme 00:35:37.531 ************************************ 00:35:37.531 16:00:58 blockdev_xnvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:35:37.531 * Looking for test storage... 
00:35:37.531 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:35:37.531 16:00:58 blockdev_xnvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:37.531 16:00:58 blockdev_xnvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:37.531 16:00:58 blockdev_xnvme -- common/autotest_common.sh@1691 -- # lcov --version 00:35:37.789 16:00:58 blockdev_xnvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:37.789 16:00:58 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:37.789 16:00:58 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:37.789 16:00:58 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:37.789 16:00:58 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:35:37.789 16:00:58 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:35:37.789 16:00:58 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:35:37.789 16:00:58 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:35:37.789 16:00:58 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:35:37.789 16:00:58 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:35:37.789 16:00:58 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:35:37.789 16:00:58 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:37.789 16:00:58 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:35:37.789 16:00:58 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:35:37.789 16:00:58 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:37.789 16:00:58 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:37.789 16:00:58 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:35:37.789 16:00:58 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:35:37.789 16:00:58 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:37.789 16:00:58 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:35:37.789 16:00:58 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:35:37.789 16:00:58 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:35:37.789 16:00:58 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:35:37.789 16:00:58 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:37.789 16:00:58 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:35:37.789 16:00:58 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:35:37.789 16:00:58 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:37.789 16:00:58 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:37.789 16:00:58 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:35:37.789 16:00:58 blockdev_xnvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:37.789 16:00:58 blockdev_xnvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:37.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:37.789 --rc genhtml_branch_coverage=1 00:35:37.789 --rc genhtml_function_coverage=1 00:35:37.789 --rc genhtml_legend=1 00:35:37.789 --rc geninfo_all_blocks=1 00:35:37.789 --rc geninfo_unexecuted_blocks=1 00:35:37.789 00:35:37.789 ' 00:35:37.789 16:00:58 blockdev_xnvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:37.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:37.789 --rc genhtml_branch_coverage=1 00:35:37.789 --rc genhtml_function_coverage=1 00:35:37.789 --rc genhtml_legend=1 
00:35:37.789 --rc geninfo_all_blocks=1 00:35:37.789 --rc geninfo_unexecuted_blocks=1 00:35:37.789 00:35:37.789 ' 00:35:37.789 16:00:58 blockdev_xnvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:37.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:37.789 --rc genhtml_branch_coverage=1 00:35:37.789 --rc genhtml_function_coverage=1 00:35:37.789 --rc genhtml_legend=1 00:35:37.789 --rc geninfo_all_blocks=1 00:35:37.789 --rc geninfo_unexecuted_blocks=1 00:35:37.789 00:35:37.790 ' 00:35:37.790 16:00:58 blockdev_xnvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:37.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:37.790 --rc genhtml_branch_coverage=1 00:35:37.790 --rc genhtml_function_coverage=1 00:35:37.790 --rc genhtml_legend=1 00:35:37.790 --rc geninfo_all_blocks=1 00:35:37.790 --rc geninfo_unexecuted_blocks=1 00:35:37.790 00:35:37.790 ' 00:35:37.790 16:00:58 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:35:37.790 16:00:58 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:35:37.790 16:00:58 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:35:37.790 16:00:58 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:35:37.790 16:00:58 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:35:37.790 16:00:58 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:35:37.790 16:00:58 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:35:37.790 16:00:58 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:35:37.790 16:00:58 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:35:37.790 16:00:58 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:35:37.790 16:00:58 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:35:37.790 16:00:58 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:35:37.790 16:00:58 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:35:37.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
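The scripts/common.sh trace above (lt, cmp_versions, decimal) is a pure-bash version comparison used to decide which lcov option set applies: split both version strings on '.', '-' and ':' and compare the numeric fields left to right. A compact sketch of the same idea, assuming plain numeric fields (the function name is mine, not an SPDK helper):

    ver_lt() {
        # true (0) when $1 sorts strictly before $2
        local IFS=.-:            # same separators cmp_versions uses
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0
            (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1
        done
        return 1                 # equal is not less-than
    }

    ver_lt 1.15 2 && echo 'lcov < 2: use the 1.x option set'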
00:35:37.790 16:00:58 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:35:37.790 16:00:58 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:35:37.790 16:00:58 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:35:37.790 16:00:58 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:35:37.790 16:00:58 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:35:37.790 16:00:58 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:35:37.790 16:00:58 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:35:37.790 16:00:58 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:35:37.790 16:00:58 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:35:37.790 16:00:58 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:35:37.790 16:00:58 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:35:37.790 16:00:58 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=69121 00:35:37.790 16:00:58 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:35:37.790 16:00:58 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 69121 00:35:37.790 16:00:58 blockdev_xnvme -- common/autotest_common.sh@833 -- # '[' -z 69121 ']' 00:35:37.790 16:00:58 blockdev_xnvme -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:37.790 16:00:58 blockdev_xnvme -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:37.790 16:00:58 blockdev_xnvme -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:37.790 16:00:58 blockdev_xnvme -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:37.790 16:00:58 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:35:37.790 16:00:58 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:35:37.790 [2024-11-05 16:00:59.001253] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 
00:35:37.790 [2024-11-05 16:00:59.001678] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69121 ] 00:35:38.048 [2024-11-05 16:00:59.153768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:38.048 [2024-11-05 16:00:59.247097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:38.653 16:00:59 blockdev_xnvme -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:38.653 16:00:59 blockdev_xnvme -- common/autotest_common.sh@866 -- # return 0 00:35:38.653 16:00:59 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:35:38.653 16:00:59 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:35:38.653 16:00:59 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:35:38.653 16:00:59 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:35:38.653 16:00:59 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:35:38.911 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:38.911 Waiting for block devices as requested 00:35:39.169 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:35:39.169 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:35:39.169 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:35:39.169 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:35:44.483 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:35:44.483 16:01:05 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:35:44.483 16:01:05 blockdev_xnvme -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:35:44.483 16:01:05 blockdev_xnvme -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:35:44.483 16:01:05 blockdev_xnvme -- common/autotest_common.sh@1656 -- # local nvme bdf 00:35:44.483 16:01:05 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:35:44.483 16:01:05 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:35:44.483 16:01:05 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:35:44.483 16:01:05 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:44.483 16:01:05 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:35:44.483 16:01:05 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:35:44.483 16:01:05 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:35:44.483 16:01:05 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:35:44.483 16:01:05 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:35:44.483 16:01:05 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:35:44.483 16:01:05 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:35:44.483 16:01:05 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:35:44.483 16:01:05 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:35:44.483 16:01:05 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:35:44.483 16:01:05 blockdev_xnvme -- 
common/autotest_common.sh@1651 -- # [[ none != none ]] 00:35:44.483 16:01:05 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:35:44.483 16:01:05 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:35:44.483 16:01:05 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:35:44.483 16:01:05 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:35:44.483 16:01:05 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:35:44.483 16:01:05 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:35:44.483 16:01:05 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:35:44.483 16:01:05 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:35:44.483 16:01:05 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:35:44.483 16:01:05 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:35:44.483 16:01:05 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:35:44.483 16:01:05 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:35:44.483 16:01:05 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:35:44.483 16:01:05 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:35:44.483 16:01:05 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:35:44.483 16:01:05 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:35:44.483 16:01:05 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:35:44.483 16:01:05 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:35:44.483 16:01:05 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:35:44.483 16:01:05 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:35:44.483 16:01:05 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:35:44.483 16:01:05 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:35:44.483 16:01:05 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:35:44.483 16:01:05 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:35:44.483 16:01:05 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:35:44.483 16:01:05 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:35:44.483 16:01:05 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:35:44.483 16:01:05 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:35:44.483 16:01:05 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:35:44.483 16:01:05 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:35:44.483 16:01:05 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:35:44.483 16:01:05 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:35:44.483 16:01:05 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:35:44.483 16:01:05 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:35:44.483 16:01:05 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:35:44.483 16:01:05 blockdev_xnvme -- 
bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:35:44.483 16:01:05 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:35:44.483 16:01:05 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:35:44.483 16:01:05 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:35:44.483 16:01:05 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:35:44.483 16:01:05 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:35:44.483 16:01:05 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:35:44.483 16:01:05 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:35:44.483 16:01:05 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:35:44.483 16:01:05 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:35:44.483 16:01:05 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:35:44.483 16:01:05 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.483 16:01:05 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:35:44.483 16:01:05 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:35:44.483 nvme0n1 00:35:44.483 nvme1n1 00:35:44.483 nvme2n1 00:35:44.483 nvme2n2 00:35:44.483 nvme2n3 00:35:44.483 nvme3n1 00:35:44.483 16:01:05 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.483 16:01:05 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:35:44.483 16:01:05 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.483 16:01:05 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:35:44.483 16:01:05 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.483 16:01:05 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:35:44.483 16:01:05 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:35:44.483 16:01:05 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.483 16:01:05 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:35:44.483 16:01:05 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.483 16:01:05 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:35:44.483 16:01:05 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.483 16:01:05 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:35:44.483 16:01:05 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.483 16:01:05 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:35:44.483 16:01:05 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.483 16:01:05 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:35:44.483 16:01:05 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.483 16:01:05 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:35:44.483 16:01:05 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:35:44.483 16:01:05 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.483 
16:01:05 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:35:44.483 16:01:05 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:35:44.483 16:01:05 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.483 16:01:05 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:35:44.484 16:01:05 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "ab7bc2a9-2a87-4879-8ba6-39c3aa394a67"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "ab7bc2a9-2a87-4879-8ba6-39c3aa394a67",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "296225c3-b90a-4b01-a05f-ae3e8b5d3a56"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "296225c3-b90a-4b01-a05f-ae3e8b5d3a56",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "4f8d6158-6569-436f-9aa1-e08d883e2fb7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4f8d6158-6569-436f-9aa1-e08d883e2fb7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "2a5d04bd-77c7-45c9-a1a9-eb3ab2ea8b2d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2a5d04bd-77c7-45c9-a1a9-eb3ab2ea8b2d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "ea430e54-cbde-4e54-a5d0-00192fdf419a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ea430e54-cbde-4e54-a5d0-00192fdf419a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "78c4f47f-4147-4e0e-8d42-1550a9ee20d1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "78c4f47f-4147-4e0e-8d42-1550a9ee20d1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:35:44.484 16:01:05 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:35:44.484 16:01:05 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:35:44.484 16:01:05 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:35:44.484 16:01:05 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:35:44.484 16:01:05 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 69121 00:35:44.484 16:01:05 blockdev_xnvme -- common/autotest_common.sh@952 -- # '[' -z 69121 ']' 00:35:44.484 16:01:05 blockdev_xnvme -- common/autotest_common.sh@956 -- # kill -0 69121 00:35:44.484 16:01:05 blockdev_xnvme -- common/autotest_common.sh@957 -- # uname 00:35:44.484 16:01:05 blockdev_xnvme -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:44.484 16:01:05 blockdev_xnvme -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69121 00:35:44.484 killing process with pid 69121 00:35:44.484 16:01:05 blockdev_xnvme -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:35:44.484 16:01:05 blockdev_xnvme -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:35:44.484 16:01:05 blockdev_xnvme -- common/autotest_common.sh@970 -- # echo 
'killing process with pid 69121' 00:35:44.484 16:01:05 blockdev_xnvme -- common/autotest_common.sh@971 -- # kill 69121 00:35:44.484 16:01:05 blockdev_xnvme -- common/autotest_common.sh@976 -- # wait 69121 00:35:45.858 16:01:06 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:35:45.858 16:01:06 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:35:45.858 16:01:06 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:35:45.858 16:01:06 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:45.858 16:01:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:35:45.858 ************************************ 00:35:45.858 START TEST bdev_hello_world 00:35:45.858 ************************************ 00:35:45.858 16:01:06 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:35:45.858 [2024-11-05 16:01:06.999235] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 00:35:45.858 [2024-11-05 16:01:06.999355] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69474 ] 00:35:45.858 [2024-11-05 16:01:07.157518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:46.115 [2024-11-05 16:01:07.254112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:46.372 [2024-11-05 16:01:07.579710] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:35:46.372 [2024-11-05 16:01:07.579772] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:35:46.372 [2024-11-05 16:01:07.579787] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:35:46.372 [2024-11-05 16:01:07.581615] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:35:46.372 [2024-11-05 16:01:07.581881] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:35:46.372 [2024-11-05 16:01:07.581902] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:35:46.372 [2024-11-05 16:01:07.582017] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
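Those hello_bdev NOTICEs are the example's entire happy path: start the app, open bdev nvme0n1, grab an I/O channel, write 'Hello World!', read it back (the line just above), then the 'Stopping app' notice that follows. The test wrapper adds nothing beyond the invocation, so the manual equivalent is a one-liner (paths as in the run_test line):

    # bdev.json carries the six bdev_xnvme_create entries built earlier in this log
    ./build/examples/hello_bdev --json test/bdev/bdev.json -b nvme0n1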
00:35:46.372 00:35:46.372 [2024-11-05 16:01:07.582033] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:35:46.937 00:35:46.937 real 0m1.332s 00:35:46.937 user 0m1.074s 00:35:46.937 sys 0m0.147s 00:35:46.937 16:01:08 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:46.937 16:01:08 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:35:46.937 ************************************ 00:35:46.937 END TEST bdev_hello_world 00:35:46.937 ************************************ 00:35:47.195 16:01:08 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:35:47.195 16:01:08 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:35:47.195 16:01:08 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:47.195 16:01:08 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:35:47.195 ************************************ 00:35:47.195 START TEST bdev_bounds 00:35:47.195 ************************************ 00:35:47.195 16:01:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:35:47.195 16:01:08 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=69516 00:35:47.195 16:01:08 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:35:47.195 16:01:08 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 69516' 00:35:47.195 Process bdevio pid: 69516 00:35:47.195 16:01:08 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 69516 00:35:47.195 16:01:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 69516 ']' 00:35:47.195 16:01:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:47.195 16:01:08 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:35:47.195 16:01:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:47.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:47.195 16:01:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:47.195 16:01:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:47.195 16:01:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:35:47.195 [2024-11-05 16:01:08.369257] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 
00:35:47.195 [2024-11-05 16:01:08.369378] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69516 ] 00:35:47.195 [2024-11-05 16:01:08.527660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:47.453 [2024-11-05 16:01:08.626817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:47.453 [2024-11-05 16:01:08.626893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:47.453 [2024-11-05 16:01:08.626906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:48.045 16:01:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:48.045 16:01:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:35:48.045 16:01:09 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:35:48.045 I/O targets: 00:35:48.045 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:35:48.045 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:35:48.045 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:35:48.045 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:35:48.045 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:35:48.045 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:35:48.045 00:35:48.045 00:35:48.045 CUnit - A unit testing framework for C - Version 2.1-3 00:35:48.045 http://cunit.sourceforge.net/ 00:35:48.045 00:35:48.045 00:35:48.045 Suite: bdevio tests on: nvme3n1 00:35:48.045 Test: blockdev write read block ...passed 00:35:48.045 Test: blockdev write zeroes read block ...passed 00:35:48.045 Test: blockdev write zeroes read no split ...passed 00:35:48.045 Test: blockdev write zeroes read split ...passed 00:35:48.045 Test: blockdev write zeroes read split partial ...passed 00:35:48.045 Test: blockdev reset ...passed 00:35:48.045 Test: blockdev write read 8 blocks ...passed 00:35:48.045 Test: blockdev write read size > 128k ...passed 00:35:48.045 Test: blockdev write read invalid size ...passed 00:35:48.045 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:35:48.045 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:35:48.045 Test: blockdev write read max offset ...passed 00:35:48.045 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:35:48.045 Test: blockdev writev readv 8 blocks ...passed 00:35:48.045 Test: blockdev writev readv 30 x 1block ...passed 00:35:48.045 Test: blockdev writev readv block ...passed 00:35:48.045 Test: blockdev writev readv size > 128k ...passed 00:35:48.045 Test: blockdev writev readv size > 128k in two iovs ...passed 00:35:48.045 Test: blockdev comparev and writev ...passed 00:35:48.045 Test: blockdev nvme passthru rw ...passed 00:35:48.045 Test: blockdev nvme passthru vendor specific ...passed 00:35:48.045 Test: blockdev nvme admin passthru ...passed 00:35:48.045 Test: blockdev copy ...passed 00:35:48.045 Suite: bdevio tests on: nvme2n3 00:35:48.045 Test: blockdev write read block ...passed 00:35:48.045 Test: blockdev write zeroes read block ...passed 00:35:48.046 Test: blockdev write zeroes read no split ...passed 00:35:48.046 Test: blockdev write zeroes read split ...passed 00:35:48.046 Test: blockdev write zeroes read split partial ...passed 00:35:48.046 Test: blockdev reset ...passed 
00:35:48.046 Test: blockdev write read 8 blocks ...passed 00:35:48.046 Test: blockdev write read size > 128k ...passed 00:35:48.046 Test: blockdev write read invalid size ...passed 00:35:48.046 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:35:48.046 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:35:48.046 Test: blockdev write read max offset ...passed 00:35:48.046 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:35:48.046 Test: blockdev writev readv 8 blocks ...passed 00:35:48.046 Test: blockdev writev readv 30 x 1block ...passed 00:35:48.046 Test: blockdev writev readv block ...passed 00:35:48.046 Test: blockdev writev readv size > 128k ...passed 00:35:48.046 Test: blockdev writev readv size > 128k in two iovs ...passed 00:35:48.046 Test: blockdev comparev and writev ...passed 00:35:48.046 Test: blockdev nvme passthru rw ...passed 00:35:48.046 Test: blockdev nvme passthru vendor specific ...passed 00:35:48.046 Test: blockdev nvme admin passthru ...passed 00:35:48.046 Test: blockdev copy ...passed 00:35:48.046 Suite: bdevio tests on: nvme2n2 00:35:48.046 Test: blockdev write read block ...passed 00:35:48.046 Test: blockdev write zeroes read block ...passed 00:35:48.046 Test: blockdev write zeroes read no split ...passed 00:35:48.304 Test: blockdev write zeroes read split ...passed 00:35:48.304 Test: blockdev write zeroes read split partial ...passed 00:35:48.304 Test: blockdev reset ...passed 00:35:48.304 Test: blockdev write read 8 blocks ...passed 00:35:48.304 Test: blockdev write read size > 128k ...passed 00:35:48.304 Test: blockdev write read invalid size ...passed 00:35:48.304 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:35:48.304 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:35:48.304 Test: blockdev write read max offset ...passed 00:35:48.304 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:35:48.304 Test: blockdev writev readv 8 blocks ...passed 00:35:48.304 Test: blockdev writev readv 30 x 1block ...passed 00:35:48.304 Test: blockdev writev readv block ...passed 00:35:48.304 Test: blockdev writev readv size > 128k ...passed 00:35:48.304 Test: blockdev writev readv size > 128k in two iovs ...passed 00:35:48.304 Test: blockdev comparev and writev ...passed 00:35:48.304 Test: blockdev nvme passthru rw ...passed 00:35:48.304 Test: blockdev nvme passthru vendor specific ...passed 00:35:48.304 Test: blockdev nvme admin passthru ...passed 00:35:48.304 Test: blockdev copy ...passed 00:35:48.304 Suite: bdevio tests on: nvme2n1 00:35:48.304 Test: blockdev write read block ...passed 00:35:48.304 Test: blockdev write zeroes read block ...passed 00:35:48.304 Test: blockdev write zeroes read no split ...passed 00:35:48.304 Test: blockdev write zeroes read split ...passed 00:35:48.304 Test: blockdev write zeroes read split partial ...passed 00:35:48.304 Test: blockdev reset ...passed 00:35:48.304 Test: blockdev write read 8 blocks ...passed 00:35:48.304 Test: blockdev write read size > 128k ...passed 00:35:48.304 Test: blockdev write read invalid size ...passed 00:35:48.304 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:35:48.304 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:35:48.304 Test: blockdev write read max offset ...passed 00:35:48.304 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:35:48.304 Test: blockdev writev readv 8 blocks 
...passed 00:35:48.304 Test: blockdev writev readv 30 x 1block ...passed 00:35:48.304 Test: blockdev writev readv block ...passed 00:35:48.304 Test: blockdev writev readv size > 128k ...passed 00:35:48.304 Test: blockdev writev readv size > 128k in two iovs ...passed 00:35:48.304 Test: blockdev comparev and writev ...passed 00:35:48.304 Test: blockdev nvme passthru rw ...passed 00:35:48.304 Test: blockdev nvme passthru vendor specific ...passed 00:35:48.304 Test: blockdev nvme admin passthru ...passed 00:35:48.304 Test: blockdev copy ...passed 00:35:48.304 Suite: bdevio tests on: nvme1n1 00:35:48.304 Test: blockdev write read block ...passed 00:35:48.304 Test: blockdev write zeroes read block ...passed 00:35:48.304 Test: blockdev write zeroes read no split ...passed 00:35:48.304 Test: blockdev write zeroes read split ...passed 00:35:48.304 Test: blockdev write zeroes read split partial ...passed 00:35:48.304 Test: blockdev reset ...passed 00:35:48.304 Test: blockdev write read 8 blocks ...passed 00:35:48.304 Test: blockdev write read size > 128k ...passed 00:35:48.304 Test: blockdev write read invalid size ...passed 00:35:48.304 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:35:48.304 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:35:48.304 Test: blockdev write read max offset ...passed 00:35:48.304 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:35:48.304 Test: blockdev writev readv 8 blocks ...passed 00:35:48.304 Test: blockdev writev readv 30 x 1block ...passed 00:35:48.304 Test: blockdev writev readv block ...passed 00:35:48.304 Test: blockdev writev readv size > 128k ...passed 00:35:48.304 Test: blockdev writev readv size > 128k in two iovs ...passed 00:35:48.304 Test: blockdev comparev and writev ...passed 00:35:48.304 Test: blockdev nvme passthru rw ...passed 00:35:48.304 Test: blockdev nvme passthru vendor specific ...passed 00:35:48.304 Test: blockdev nvme admin passthru ...passed 00:35:48.304 Test: blockdev copy ...passed 00:35:48.304 Suite: bdevio tests on: nvme0n1 00:35:48.304 Test: blockdev write read block ...passed 00:35:48.304 Test: blockdev write zeroes read block ...passed 00:35:48.304 Test: blockdev write zeroes read no split ...passed 00:35:48.304 Test: blockdev write zeroes read split ...passed 00:35:48.304 Test: blockdev write zeroes read split partial ...passed 00:35:48.304 Test: blockdev reset ...passed 00:35:48.304 Test: blockdev write read 8 blocks ...passed 00:35:48.304 Test: blockdev write read size > 128k ...passed 00:35:48.304 Test: blockdev write read invalid size ...passed 00:35:48.304 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:35:48.305 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:35:48.305 Test: blockdev write read max offset ...passed 00:35:48.305 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:35:48.305 Test: blockdev writev readv 8 blocks ...passed 00:35:48.305 Test: blockdev writev readv 30 x 1block ...passed 00:35:48.305 Test: blockdev writev readv block ...passed 00:35:48.305 Test: blockdev writev readv size > 128k ...passed 00:35:48.305 Test: blockdev writev readv size > 128k in two iovs ...passed 00:35:48.305 Test: blockdev comparev and writev ...passed 00:35:48.305 Test: blockdev nvme passthru rw ...passed 00:35:48.305 Test: blockdev nvme passthru vendor specific ...passed 00:35:48.305 Test: blockdev nvme admin passthru ...passed 00:35:48.305 Test: blockdev copy ...passed 
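Six suites above, one per exported bdev, each running the same 23 blockdev tests; the CUnit Run Summary that follows totals them to 138. The harness splits this between a server and a driver process, so the by-hand equivalent is two commands (both taken verbatim from the run_test/bdevio lines earlier):

    # server half: claim the bdevs from the generated config and wait to be driven
    ./test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json '' &

    # driver half: kick off the registered tests over the RPC socket
    ./test/bdev/bdevio/tests.py perform_tests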
00:35:48.305 00:35:48.305 Run Summary: Type Total Ran Passed Failed Inactive 00:35:48.305 suites 6 6 n/a 0 0 00:35:48.305 tests 138 138 138 0 0 00:35:48.305 asserts 780 780 780 0 n/a 00:35:48.305 00:35:48.305 Elapsed time = 0.835 seconds 00:35:48.305 0 00:35:48.305 16:01:09 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 69516 00:35:48.305 16:01:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 69516 ']' 00:35:48.305 16:01:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 69516 00:35:48.305 16:01:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:35:48.305 16:01:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:48.305 16:01:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69516 00:35:48.305 killing process with pid 69516 00:35:48.305 16:01:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:35:48.305 16:01:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:35:48.305 16:01:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69516' 00:35:48.305 16:01:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@971 -- # kill 69516 00:35:48.305 16:01:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@976 -- # wait 69516 00:35:49.239 16:01:10 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:35:49.239 00:35:49.239 real 0m2.024s 00:35:49.239 user 0m5.061s 00:35:49.239 sys 0m0.289s 00:35:49.239 16:01:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:49.239 16:01:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:35:49.239 ************************************ 00:35:49.239 END TEST bdev_bounds 00:35:49.239 ************************************ 00:35:49.239 16:01:10 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:35:49.239 16:01:10 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:35:49.239 16:01:10 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:49.239 16:01:10 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:35:49.239 ************************************ 00:35:49.239 START TEST bdev_nbd 00:35:49.239 ************************************ 00:35:49.239 16:01:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:35:49.239 16:01:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:35:49.239 16:01:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:35:49.239 16:01:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:49.239 16:01:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:35:49.239 16:01:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:35:49.239 16:01:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:35:49.239 16:01:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
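bdev_nbd, starting here, exports each of the six bdevs as a kernel /dev/nbdX node through the nbd-aware bdev_svc app, then proves each node is live with a single 4 KiB O_DIRECT read (the waitfornbd/dd sequences that follow, each reporting '1+0 records in/out'). Stripped of the harness, one device round trip looks like this (rpc.py path and socket are from the trace; nbd_stop_disk does not appear in this excerpt but is the matching teardown RPC):

    # map bdev nvme0n1 to a kernel block node over the app's RPC socket
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0

    # one direct-I/O block read is the whole liveness check
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct

    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0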
00:35:49.239 16:01:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:35:49.239 16:01:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:35:49.239 16:01:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:35:49.239 16:01:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:35:49.239 16:01:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:35:49.239 16:01:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:35:49.239 16:01:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:35:49.239 16:01:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:35:49.239 16:01:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=69574 00:35:49.239 16:01:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:35:49.239 16:01:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 69574 /var/tmp/spdk-nbd.sock 00:35:49.239 16:01:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 69574 ']' 00:35:49.239 16:01:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:35:49.239 16:01:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:35:49.239 16:01:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:49.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:35:49.239 16:01:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:35:49.239 16:01:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:49.239 16:01:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:35:49.239 [2024-11-05 16:01:10.437680] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 
00:35:49.239 [2024-11-05 16:01:10.437815] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:49.239 [2024-11-05 16:01:10.591848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:49.497 [2024-11-05 16:01:10.685262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:50.063 16:01:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:50.063 16:01:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:35:50.063 16:01:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:35:50.063 16:01:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:50.063 16:01:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:35:50.063 16:01:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:35:50.063 16:01:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:35:50.063 16:01:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:50.063 16:01:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:35:50.063 16:01:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:35:50.063 16:01:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:35:50.063 16:01:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:35:50.063 16:01:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:35:50.063 16:01:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:35:50.063 16:01:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:35:50.321 16:01:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:35:50.321 16:01:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:35:50.321 16:01:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:35:50.321 16:01:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:35:50.321 16:01:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:35:50.321 16:01:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:35:50.321 16:01:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:35:50.321 16:01:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:35:50.321 16:01:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:35:50.321 16:01:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:35:50.321 16:01:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:35:50.321 16:01:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:50.321 
1+0 records in 00:35:50.321 1+0 records out 00:35:50.321 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000424362 s, 9.7 MB/s 00:35:50.321 16:01:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:50.321 16:01:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:35:50.321 16:01:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:50.321 16:01:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:35:50.321 16:01:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:35:50.321 16:01:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:35:50.321 16:01:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:35:50.321 16:01:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:35:50.592 16:01:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:35:50.592 16:01:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:35:50.592 16:01:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:35:50.592 16:01:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:35:50.592 16:01:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:35:50.592 16:01:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:35:50.592 16:01:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:35:50.592 16:01:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:35:50.592 16:01:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:35:50.592 16:01:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:35:50.592 16:01:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:35:50.592 16:01:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:50.592 1+0 records in 00:35:50.592 1+0 records out 00:35:50.592 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00136022 s, 3.0 MB/s 00:35:50.592 16:01:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:50.592 16:01:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:35:50.592 16:01:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:50.592 16:01:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:35:50.592 16:01:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:35:50.593 16:01:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:35:50.593 16:01:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:35:50.593 16:01:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:35:50.852 16:01:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:35:50.852 16:01:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:35:50.852 16:01:11 
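The waitfornbd blocks traced above gate every export: they poll /proc/partitions until the kernel lists the new device name, then do a single 4 KiB O_DIRECT read through it and check that the copy is non-empty, which is what the dd/stat/rm sequence and the "1+0 records in/out" output correspond to. A condensed sketch reconstructed from the xtrace; the retry sleep is an assumption (grep succeeds on the first pass here, so no retry is visible), and the sketch stages the test copy under /tmp instead of the repo's test/bdev/nbdtest:

    waitfornbd() {
        local nbd_name=$1 i size
        # Poll until the kernel lists the device (up to 20 tries).
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed retry interval; not visible in this trace
        done
        # Readiness proof: one 4 KiB direct read, copy must be non-empty.
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [[ $size != 0 ]]
    }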
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:35:50.852 16:01:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd2 00:35:50.852 16:01:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:35:50.852 16:01:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:35:50.852 16:01:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:35:50.852 16:01:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd2 /proc/partitions 00:35:50.852 16:01:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:35:50.852 16:01:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:35:50.852 16:01:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:35:50.852 16:01:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:50.852 1+0 records in 00:35:50.852 1+0 records out 00:35:50.852 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000400186 s, 10.2 MB/s 00:35:50.852 16:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:50.852 16:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:35:50.852 16:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:50.852 16:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:35:50.852 16:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:35:50.852 16:01:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:35:50.852 16:01:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:35:50.852 16:01:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:35:51.110 16:01:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:35:51.110 16:01:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:35:51.110 16:01:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:35:51.110 16:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd3 00:35:51.110 16:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:35:51.110 16:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:35:51.110 16:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:35:51.110 16:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd3 /proc/partitions 00:35:51.110 16:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:35:51.110 16:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:35:51.110 16:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:35:51.110 16:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:51.110 1+0 records in 00:35:51.110 1+0 records out 00:35:51.110 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00090782 s, 4.5 MB/s 00:35:51.110 16:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:51.110 16:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:35:51.110 16:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:51.110 16:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:35:51.110 16:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:35:51.111 16:01:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:35:51.111 16:01:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:35:51.111 16:01:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:35:51.111 16:01:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:35:51.111 16:01:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:35:51.372 16:01:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:35:51.372 16:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd4 00:35:51.372 16:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:35:51.372 16:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:35:51.372 16:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:35:51.372 16:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd4 /proc/partitions 00:35:51.372 16:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:35:51.372 16:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:35:51.372 16:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:35:51.372 16:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:51.372 1+0 records in 00:35:51.372 1+0 records out 00:35:51.372 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000377735 s, 10.8 MB/s 00:35:51.372 16:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:51.372 16:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:35:51.372 16:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:51.372 16:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:35:51.372 16:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:35:51.372 16:01:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:35:51.372 16:01:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:35:51.372 16:01:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:35:51.372 16:01:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:35:51.372 16:01:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:35:51.372 16:01:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:35:51.372 16:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd5 00:35:51.372 16:01:12 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:35:51.372 16:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:35:51.372 16:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:35:51.372 16:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd5 /proc/partitions 00:35:51.372 16:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:35:51.372 16:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:35:51.372 16:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:35:51.372 16:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:51.372 1+0 records in 00:35:51.372 1+0 records out 00:35:51.372 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00104182 s, 3.9 MB/s 00:35:51.372 16:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:51.373 16:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:35:51.373 16:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:51.373 16:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:35:51.373 16:01:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:35:51.373 16:01:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:35:51.373 16:01:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:35:51.373 16:01:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:35:51.635 16:01:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:35:51.635 { 00:35:51.635 "nbd_device": "/dev/nbd0", 00:35:51.635 "bdev_name": "nvme0n1" 00:35:51.635 }, 00:35:51.635 { 00:35:51.635 "nbd_device": "/dev/nbd1", 00:35:51.635 "bdev_name": "nvme1n1" 00:35:51.635 }, 00:35:51.635 { 00:35:51.635 "nbd_device": "/dev/nbd2", 00:35:51.635 "bdev_name": "nvme2n1" 00:35:51.635 }, 00:35:51.635 { 00:35:51.635 "nbd_device": "/dev/nbd3", 00:35:51.635 "bdev_name": "nvme2n2" 00:35:51.635 }, 00:35:51.635 { 00:35:51.635 "nbd_device": "/dev/nbd4", 00:35:51.635 "bdev_name": "nvme2n3" 00:35:51.635 }, 00:35:51.635 { 00:35:51.635 "nbd_device": "/dev/nbd5", 00:35:51.635 "bdev_name": "nvme3n1" 00:35:51.635 } 00:35:51.635 ]' 00:35:51.635 16:01:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:35:51.635 16:01:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:35:51.635 16:01:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:35:51.635 { 00:35:51.635 "nbd_device": "/dev/nbd0", 00:35:51.635 "bdev_name": "nvme0n1" 00:35:51.635 }, 00:35:51.635 { 00:35:51.635 "nbd_device": "/dev/nbd1", 00:35:51.635 "bdev_name": "nvme1n1" 00:35:51.635 }, 00:35:51.635 { 00:35:51.635 "nbd_device": "/dev/nbd2", 00:35:51.635 "bdev_name": "nvme2n1" 00:35:51.635 }, 00:35:51.635 { 00:35:51.635 "nbd_device": "/dev/nbd3", 00:35:51.635 "bdev_name": "nvme2n2" 00:35:51.635 }, 00:35:51.635 { 00:35:51.635 "nbd_device": "/dev/nbd4", 00:35:51.635 "bdev_name": "nvme2n3" 00:35:51.635 }, 00:35:51.635 { 00:35:51.635 "nbd_device": 
"/dev/nbd5", 00:35:51.635 "bdev_name": "nvme3n1" 00:35:51.635 } 00:35:51.635 ]' 00:35:51.635 16:01:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:35:51.635 16:01:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:51.635 16:01:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:35:51.635 16:01:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:51.635 16:01:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:35:51.635 16:01:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:51.635 16:01:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:35:51.894 16:01:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:35:51.894 16:01:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:35:51.894 16:01:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:35:51.894 16:01:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:51.894 16:01:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:51.894 16:01:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:35:51.894 16:01:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:35:51.894 16:01:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:35:51.895 16:01:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:51.895 16:01:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:35:52.153 16:01:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:35:52.153 16:01:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:35:52.153 16:01:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:35:52.153 16:01:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:52.153 16:01:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:52.153 16:01:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:35:52.153 16:01:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:35:52.153 16:01:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:35:52.153 16:01:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:52.153 16:01:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:35:52.412 16:01:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:35:52.412 16:01:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:35:52.412 16:01:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:35:52.412 16:01:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:52.412 16:01:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:52.412 16:01:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:35:52.412 16:01:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:35:52.412 16:01:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:35:52.412 16:01:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:52.412 16:01:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:35:52.669 16:01:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:35:52.669 16:01:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:35:52.669 16:01:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:35:52.669 16:01:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:52.669 16:01:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:52.669 16:01:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:35:52.669 16:01:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:35:52.669 16:01:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:35:52.669 16:01:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:52.669 16:01:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:35:52.928 16:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:35:52.928 16:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:35:52.928 16:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:35:52.928 16:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:52.928 16:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:52.928 16:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:35:52.928 16:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:35:52.928 16:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:35:52.928 16:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:52.928 16:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:35:52.928 16:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:35:52.928 16:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:35:52.928 16:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:35:52.928 16:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:52.928 16:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:52.928 16:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:35:52.928 16:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:35:52.928 16:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:35:52.928 16:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:35:52.928 16:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:52.928 16:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:35:53.190 16:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:35:53.190 16:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:35:53.190 16:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:35:53.190 16:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:35:53.190 16:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:35:53.190 16:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:35:53.190 16:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:35:53.190 16:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:35:53.190 16:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:35:53.190 16:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:35:53.190 16:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:35:53.190 16:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:35:53.190 16:01:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:35:53.190 16:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:53.190 16:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:35:53.190 16:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:35:53.190 16:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:35:53.190 16:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:35:53.190 16:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:35:53.190 16:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:53.190 16:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:35:53.190 16:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:35:53.190 16:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:35:53.190 16:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:35:53.190 16:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:35:53.190 16:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:35:53.190 16:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:35:53.190 16:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:35:53.474 /dev/nbd0 00:35:53.474 16:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:35:53.474 16:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:35:53.474 16:01:14 blockdev_xnvme.bdev_nbd -- 
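With the post-stop check satisfied (nbd_get_disks returned [] and count=0 above), the suite moves on to nbd_rpc_data_verify, which exports the same six bdevs again but pins each one to an explicit device node instead of letting the RPC choose. The two-argument nbd_start_disk form from the trace, written as a paired-array loop (rpc and sock as before):

    bdev_list=(nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1)
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
    for ((i = 0; i < ${#bdev_list[@]}; i++)); do
        # Two-argument form: the caller pins the kernel device node.
        "$rpc" -s "$sock" nbd_start_disk "${bdev_list[i]}" "${nbd_list[i]}"
    done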
common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:35:53.474 16:01:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:35:53.474 16:01:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:35:53.474 16:01:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:35:53.474 16:01:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:35:53.474 16:01:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:35:53.474 16:01:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:35:53.474 16:01:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:35:53.474 16:01:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:53.474 1+0 records in 00:35:53.474 1+0 records out 00:35:53.474 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000351983 s, 11.6 MB/s 00:35:53.474 16:01:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:53.474 16:01:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:35:53.474 16:01:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:53.474 16:01:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:35:53.474 16:01:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:35:53.474 16:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:53.474 16:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:35:53.474 16:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:35:53.732 /dev/nbd1 00:35:53.732 16:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:35:53.732 16:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:35:53.732 16:01:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:35:53.732 16:01:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:35:53.732 16:01:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:35:53.732 16:01:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:35:53.732 16:01:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:35:53.732 16:01:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:35:53.732 16:01:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:35:53.732 16:01:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:35:53.732 16:01:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:53.732 1+0 records in 00:35:53.732 1+0 records out 00:35:53.732 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000579767 s, 7.1 MB/s 00:35:53.732 16:01:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:53.732 16:01:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:35:53.732 16:01:14 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:53.732 16:01:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:35:53.732 16:01:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:35:53.732 16:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:53.732 16:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:35:53.732 16:01:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:35:53.991 /dev/nbd10 00:35:53.991 16:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:35:53.991 16:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:35:53.991 16:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd10 00:35:53.991 16:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:35:53.991 16:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:35:53.991 16:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:35:53.991 16:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd10 /proc/partitions 00:35:53.991 16:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:35:53.991 16:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:35:53.991 16:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:35:53.991 16:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:53.991 1+0 records in 00:35:53.991 1+0 records out 00:35:53.991 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00041833 s, 9.8 MB/s 00:35:53.991 16:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:53.991 16:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:35:53.991 16:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:53.991 16:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:35:53.991 16:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:35:53.991 16:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:53.991 16:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:35:53.991 16:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:35:54.250 /dev/nbd11 00:35:54.250 16:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:35:54.250 16:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:35:54.250 16:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd11 00:35:54.250 16:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:35:54.250 16:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:35:54.250 16:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:35:54.250 16:01:15 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd11 /proc/partitions 00:35:54.250 16:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:35:54.250 16:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:35:54.250 16:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:35:54.250 16:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:54.250 1+0 records in 00:35:54.250 1+0 records out 00:35:54.250 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000677556 s, 6.0 MB/s 00:35:54.250 16:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:54.250 16:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:35:54.250 16:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:54.250 16:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:35:54.250 16:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:35:54.250 16:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:54.250 16:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:35:54.250 16:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:35:54.250 /dev/nbd12 00:35:54.508 16:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:35:54.508 16:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:35:54.508 16:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd12 00:35:54.508 16:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:35:54.508 16:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:35:54.508 16:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:35:54.508 16:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd12 /proc/partitions 00:35:54.508 16:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:35:54.508 16:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:35:54.508 16:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:35:54.508 16:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:54.508 1+0 records in 00:35:54.508 1+0 records out 00:35:54.508 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000426728 s, 9.6 MB/s 00:35:54.508 16:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:54.508 16:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:35:54.508 16:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:54.508 16:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:35:54.508 16:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:35:54.508 16:01:15 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:54.508 16:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:35:54.508 16:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:35:54.508 /dev/nbd13 00:35:54.508 16:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:35:54.508 16:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:35:54.508 16:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd13 00:35:54.508 16:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:35:54.508 16:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:35:54.508 16:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:35:54.508 16:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd13 /proc/partitions 00:35:54.508 16:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:35:54.508 16:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:35:54.508 16:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:35:54.508 16:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:54.508 1+0 records in 00:35:54.508 1+0 records out 00:35:54.508 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000308984 s, 13.3 MB/s 00:35:54.508 16:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:54.508 16:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:35:54.508 16:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:54.508 16:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:35:54.508 16:01:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:35:54.508 16:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:54.508 16:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:35:54.508 16:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:35:54.508 16:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:54.508 16:01:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:35:54.767 16:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:35:54.767 { 00:35:54.767 "nbd_device": "/dev/nbd0", 00:35:54.767 "bdev_name": "nvme0n1" 00:35:54.767 }, 00:35:54.767 { 00:35:54.767 "nbd_device": "/dev/nbd1", 00:35:54.767 "bdev_name": "nvme1n1" 00:35:54.767 }, 00:35:54.767 { 00:35:54.767 "nbd_device": "/dev/nbd10", 00:35:54.767 "bdev_name": "nvme2n1" 00:35:54.767 }, 00:35:54.767 { 00:35:54.767 "nbd_device": "/dev/nbd11", 00:35:54.767 "bdev_name": "nvme2n2" 00:35:54.767 }, 00:35:54.767 { 00:35:54.767 "nbd_device": "/dev/nbd12", 00:35:54.767 "bdev_name": "nvme2n3" 00:35:54.767 }, 00:35:54.767 { 00:35:54.767 "nbd_device": "/dev/nbd13", 00:35:54.767 "bdev_name": "nvme3n1" 00:35:54.767 } 00:35:54.767 ]' 00:35:54.767 16:01:16 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:35:54.767 { 00:35:54.767 "nbd_device": "/dev/nbd0", 00:35:54.767 "bdev_name": "nvme0n1" 00:35:54.767 }, 00:35:54.767 { 00:35:54.767 "nbd_device": "/dev/nbd1", 00:35:54.767 "bdev_name": "nvme1n1" 00:35:54.767 }, 00:35:54.767 { 00:35:54.767 "nbd_device": "/dev/nbd10", 00:35:54.767 "bdev_name": "nvme2n1" 00:35:54.767 }, 00:35:54.767 { 00:35:54.767 "nbd_device": "/dev/nbd11", 00:35:54.767 "bdev_name": "nvme2n2" 00:35:54.767 }, 00:35:54.767 { 00:35:54.767 "nbd_device": "/dev/nbd12", 00:35:54.767 "bdev_name": "nvme2n3" 00:35:54.767 }, 00:35:54.767 { 00:35:54.767 "nbd_device": "/dev/nbd13", 00:35:54.767 "bdev_name": "nvme3n1" 00:35:54.767 } 00:35:54.767 ]' 00:35:54.767 16:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:35:54.767 16:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:35:54.767 /dev/nbd1 00:35:54.767 /dev/nbd10 00:35:54.767 /dev/nbd11 00:35:54.767 /dev/nbd12 00:35:54.767 /dev/nbd13' 00:35:54.767 16:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:35:54.767 /dev/nbd1 00:35:54.767 /dev/nbd10 00:35:54.767 /dev/nbd11 00:35:54.767 /dev/nbd12 00:35:54.767 /dev/nbd13' 00:35:54.767 16:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:35:54.767 16:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:35:54.767 16:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:35:54.767 16:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:35:54.767 16:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:35:54.767 16:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:35:54.767 16:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:35:54.767 16:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:35:54.767 16:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:35:54.767 16:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:35:54.767 16:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:35:54.767 16:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:35:54.767 256+0 records in 00:35:54.767 256+0 records out 00:35:54.767 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00608011 s, 172 MB/s 00:35:54.767 16:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:35:54.767 16:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:35:55.028 256+0 records in 00:35:55.028 256+0 records out 00:35:55.028 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0473526 s, 22.1 MB/s 00:35:55.028 16:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:35:55.028 16:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:35:55.028 256+0 records in 00:35:55.028 256+0 records out 00:35:55.028 1048576 bytes (1.0 MB, 1.0 
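The write half of nbd_dd_data_verify generates the 1 MiB random reference file once (256 x 4 KiB from /dev/urandom) and then copies it onto every exported device with oflag=direct, so the writes reach the NBD devices rather than stopping in the page cache; the per-device MB/s figures above vary widely because each dd is a tiny single-shot transfer. Condensed from the trace:

    randfile=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    # 1 MiB of random reference data, generated once.
    dd if=/dev/urandom of="$randfile" bs=4096 count=256
    for dev in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
        # O_DIRECT writes land on the NBD device, not the page cache.
        dd if="$randfile" of="$dev" bs=4096 count=256 oflag=direct
    done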
MiB) copied, 0.0622726 s, 16.8 MB/s 00:35:55.028 16:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:35:55.028 16:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:35:55.028 256+0 records in 00:35:55.028 256+0 records out 00:35:55.028 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0498369 s, 21.0 MB/s 00:35:55.028 16:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:35:55.028 16:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:35:55.028 256+0 records in 00:35:55.028 256+0 records out 00:35:55.028 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0477394 s, 22.0 MB/s 00:35:55.028 16:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:35:55.028 16:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:35:55.290 256+0 records in 00:35:55.290 256+0 records out 00:35:55.290 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.177322 s, 5.9 MB/s 00:35:55.290 16:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:35:55.290 16:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:35:55.551 256+0 records in 00:35:55.551 256+0 records out 00:35:55.551 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.210249 s, 5.0 MB/s 00:35:55.551 16:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:35:55.551 16:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:35:55.551 16:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:35:55.551 16:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:35:55.551 16:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:35:55.551 16:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:35:55.551 16:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:35:55.551 16:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:35:55.551 16:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:35:55.551 16:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:35:55.551 16:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:35:55.551 16:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:35:55.551 16:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:35:55.551 16:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:35:55.551 16:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:35:55.551 16:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:35:55.551 16:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:35:55.551 16:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:35:55.551 16:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:35:55.551 16:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:35:55.551 16:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:35:55.551 16:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:55.551 16:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:35:55.551 16:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:55.551 16:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:35:55.551 16:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:55.551 16:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:35:55.815 16:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:35:55.815 16:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:35:55.815 16:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:35:55.815 16:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:55.815 16:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:55.815 16:01:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:35:55.815 16:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:35:55.815 16:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:35:55.815 16:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:55.815 16:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:35:56.081 16:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:35:56.081 16:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:35:56.081 16:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:35:56.081 16:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:56.081 16:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:56.081 16:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:35:56.081 16:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:35:56.081 16:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:35:56.081 16:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:56.081 16:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
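The verify half reads each device back with cmp -b -n 1M against the reference file, so a single differing byte in the first mebibyte fails the test, and the teardown mirrors startup: nbd_stop_disk per device followed by a waitfornbd_exit poll that waits for the name to drop out of /proc/partitions. A sketch of the exit poll reconstructed from the trace; the sleep interval is an assumption, since the trace only shows the grep and the break:

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            if grep -q -w "$nbd_name" /proc/partitions; then
                sleep 0.1   # device still listed; assumed retry interval
            else
                break       # gone from /proc/partitions: stop took effect
            fi
        done
    }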
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:35:56.081 16:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:35:56.081 16:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:35:56.081 16:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:35:56.081 16:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:56.081 16:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:56.081 16:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:35:56.081 16:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:35:56.081 16:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:35:56.081 16:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:56.081 16:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:35:56.340 16:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:35:56.340 16:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:35:56.340 16:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:35:56.340 16:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:56.340 16:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:56.340 16:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:35:56.340 16:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:35:56.340 16:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:35:56.340 16:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:56.340 16:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:35:56.602 16:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:35:56.602 16:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:35:56.602 16:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:35:56.602 16:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:56.602 16:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:56.602 16:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:35:56.602 16:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:35:56.602 16:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:35:56.602 16:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:56.602 16:01:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:35:56.863 16:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:35:56.863 16:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:35:56.863 16:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:35:56.863 16:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:56.863 16:01:18 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:56.863 16:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:35:56.863 16:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:35:56.863 16:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:35:56.863 16:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:35:56.863 16:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:56.863 16:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:35:57.121 16:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:35:57.121 16:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:35:57.121 16:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:35:57.121 16:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:35:57.121 16:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:35:57.121 16:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:35:57.121 16:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:35:57.121 16:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:35:57.121 16:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:35:57.121 16:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:35:57.121 16:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:35:57.121 16:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:35:57.121 16:01:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:35:57.121 16:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:57.121 16:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:35:57.121 16:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:35:57.379 malloc_lvol_verify 00:35:57.379 16:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:35:57.379 cf5bfe1e-27b0-400c-9a22-9643773973cc 00:35:57.379 16:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:35:57.637 840e136b-245b-42e8-aa58-ae9f66c34d26 00:35:57.637 16:01:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:35:57.894 /dev/nbd0 00:35:57.894 16:01:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:35:57.894 16:01:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:35:57.894 16:01:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:35:57.894 16:01:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:35:57.894 16:01:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
00:35:57.894 mke2fs 1.47.0 (5-Feb-2023) 00:35:57.894 Discarding device blocks: 0/4096 done 00:35:57.894 Creating filesystem with 4096 1k blocks and 1024 inodes 00:35:57.894 00:35:57.894 Allocating group tables: 0/1 done 00:35:57.894 Writing inode tables: 0/1 done 00:35:57.894 Creating journal (1024 blocks): done 00:35:57.894 Writing superblocks and filesystem accounting information: 0/1 done 00:35:57.894 00:35:57.894 16:01:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:35:57.894 16:01:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:57.894 16:01:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:35:57.894 16:01:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:57.894 16:01:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:35:57.894 16:01:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:57.895 16:01:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:35:58.152 16:01:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:35:58.152 16:01:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:35:58.152 16:01:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:35:58.152 16:01:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:58.152 16:01:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:58.152 16:01:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:35:58.152 16:01:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:35:58.152 16:01:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:35:58.152 16:01:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 69574 00:35:58.152 16:01:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 69574 ']' 00:35:58.152 16:01:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 69574 00:35:58.152 16:01:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:35:58.152 16:01:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:58.152 16:01:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69574 00:35:58.153 killing process with pid 69574 00:35:58.153 16:01:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:35:58.153 16:01:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:35:58.153 16:01:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69574' 00:35:58.153 16:01:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@971 -- # kill 69574 00:35:58.153 16:01:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@976 -- # wait 69574 00:35:58.717 ************************************ 00:35:58.717 END TEST bdev_nbd 00:35:58.717 ************************************ 00:35:58.717 16:01:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:35:58.718 00:35:58.718 real 0m9.651s 00:35:58.718 user 0m13.717s 00:35:58.718 sys 0m3.191s 00:35:58.718 16:01:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:58.718 
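The last nbd case, nbd_with_lvol_verify, stacks a logical volume on a malloc bdev, exports it over NBD, and proves the whole stack is usable by formatting it with ext4 (the mke2fs output above); the test then stops the disk and kills the spdk-nbd app (pid 69574 in this run). The RPC chain condensed from the trace; the capacity gate is shown here as a direct read of /sys/block/nbd0/size, which the trace evaluated as 8192 sectors:

    # 16 MiB malloc bdev with 512 B blocks -> lvstore "lvs" -> 4 MiB lvol
    "$rpc" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512
    "$rpc" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs
    "$rpc" -s "$sock" bdev_lvol_create lvol 4 -l lvs
    "$rpc" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0
    # Capacity gate before mkfs: the size must be non-zero.
    (( $(< /sys/block/nbd0/size) != 0 ))
    mkfs.ext4 /dev/nbd0
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0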
16:01:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:35:58.718 16:01:20 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:35:58.718 16:01:20 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:35:58.718 16:01:20 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:35:58.718 16:01:20 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:35:58.718 16:01:20 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:35:58.718 16:01:20 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:58.718 16:01:20 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:35:58.718 ************************************ 00:35:58.718 START TEST bdev_fio 00:35:58.718 ************************************ 00:35:58.718 16:01:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1127 -- # fio_test_suite '' 00:35:58.718 16:01:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:35:58.718 16:01:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:35:58.718 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:35:58.718 16:01:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:35:58.718 16:01:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:35:58.718 16:01:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:35:58.718 16:01:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:35:58.718 16:01:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:35:58.718 16:01:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:35:58.718 16:01:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=verify 00:35:58.718 16:01:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type=AIO 00:35:58.718 16:01:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:35:58.718 16:01:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local fio_dir=/usr/src/fio 00:35:58.718 16:01:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:35:58.718 16:01:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z verify ']' 00:35:58.718 16:01:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:35:58.718 16:01:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:35:58.718 16:01:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:35:58.718 16:01:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1315 -- # '[' verify == verify ']' 00:35:58.718 16:01:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1316 -- # cat 00:35:58.718 16:01:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # '[' AIO == AIO ']' 00:35:58.718 16:01:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1326 -- # /usr/src/fio/fio --version 00:35:58.980 16:01:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1326 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:35:58.980 16:01:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # echo 
serialize_overlap=1 00:35:58.980 16:01:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:35:58.980 16:01:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:35:58.980 16:01:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:35:58.980 16:01:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:35:58.980 16:01:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:35:58.980 16:01:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:35:58.980 16:01:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:35:58.980 16:01:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:35:58.980 16:01:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:35:58.980 16:01:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:35:58.980 16:01:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:35:58.980 16:01:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:35:58.980 16:01:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:35:58.980 16:01:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:35:58.980 16:01:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:35:58.980 16:01:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:35:58.980 16:01:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:35:58.980 16:01:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:35:58.980 16:01:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:35:58.980 16:01:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:35:58.980 16:01:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1103 -- # '[' 11 -le 1 ']' 00:35:58.980 16:01:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:58.980 16:01:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:35:58.980 ************************************ 00:35:58.980 START TEST bdev_fio_rw_verify 00:35:58.980 ************************************ 00:35:58.980 16:01:20 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1127 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:35:58.980 16:01:20 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:35:58.980 16:01:20 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:35:58.980 16:01:20 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:58.980 16:01:20 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local sanitizers 00:35:58.980 16:01:20 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:58.980 16:01:20 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # shift 00:35:58.980 16:01:20 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # local asan_lib= 00:35:58.980 16:01:20 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:35:58.980 16:01:20 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:58.980 16:01:20 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # grep libasan 00:35:58.980 16:01:20 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:35:58.980 16:01:20 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:35:58.980 16:01:20 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:35:58.980 16:01:20 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # break 00:35:58.980 16:01:20 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:35:58.980 16:01:20 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:35:58.980 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:35:58.980 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:35:58.980 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:35:58.980 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:35:58.980 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:35:58.980 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:35:58.980 fio-3.35 00:35:58.980 Starting 6 threads 00:36:11.236 00:36:11.236 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=69967: Tue Nov 5 16:01:31 2024 00:36:11.236 read: IOPS=25.5k, BW=99.6MiB/s (104MB/s)(996MiB/10002msec) 00:36:11.236 slat (usec): min=2, max=2828, avg= 5.01, stdev=13.07 00:36:11.236 clat (usec): min=64, max=9056, avg=703.05, 
stdev=684.82 00:36:11.236 lat (usec): min=67, max=9066, avg=708.07, stdev=685.44 00:36:11.236 clat percentiles (usec): 00:36:11.236 | 50.000th=[ 437], 99.000th=[ 3228], 99.900th=[ 4686], 99.990th=[ 6587], 00:36:11.236 | 99.999th=[ 8979] 00:36:11.236 write: IOPS=25.9k, BW=101MiB/s (106MB/s)(1011MiB/10002msec); 0 zone resets 00:36:11.236 slat (usec): min=10, max=4260, avg=31.40, stdev=105.12 00:36:11.236 clat (usec): min=58, max=8388, avg=903.27, stdev=776.43 00:36:11.236 lat (usec): min=72, max=8839, avg=934.67, stdev=791.34 00:36:11.236 clat percentiles (usec): 00:36:11.236 | 50.000th=[ 611], 99.000th=[ 3720], 99.900th=[ 5342], 99.990th=[ 6849], 00:36:11.236 | 99.999th=[ 8356] 00:36:11.236 bw ( KiB/s): min=51301, max=167880, per=100.00%, avg=106271.00, stdev=5492.03, samples=114 00:36:11.236 iops : min=12823, max=41968, avg=26566.68, stdev=1373.02, samples=114 00:36:11.236 lat (usec) : 100=0.14%, 250=13.70%, 500=33.09%, 750=19.23%, 1000=9.00% 00:36:11.236 lat (msec) : 2=16.75%, 4=7.63%, 10=0.47% 00:36:11.236 cpu : usr=44.00%, sys=31.01%, ctx=7898, majf=0, minf=22322 00:36:11.236 IO depths : 1=11.3%, 2=23.6%, 4=51.3%, 8=13.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:11.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.236 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.236 issued rwts: total=255053,258741,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.236 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:11.236 00:36:11.236 Run status group 0 (all jobs): 00:36:11.236 READ: bw=99.6MiB/s (104MB/s), 99.6MiB/s-99.6MiB/s (104MB/s-104MB/s), io=996MiB (1045MB), run=10002-10002msec 00:36:11.236 WRITE: bw=101MiB/s (106MB/s), 101MiB/s-101MiB/s (106MB/s-106MB/s), io=1011MiB (1060MB), run=10002-10002msec 00:36:11.236 ----------------------------------------------------- 00:36:11.236 Suppressions used: 00:36:11.236 count bytes template 00:36:11.236 6 48 /usr/src/fio/parse.c 00:36:11.236 3506 336576 /usr/src/fio/iolog.c 00:36:11.236 1 8 libtcmalloc_minimal.so 00:36:11.236 1 904 libcrypto.so 00:36:11.236 ----------------------------------------------------- 00:36:11.236 00:36:11.236 00:36:11.236 real 0m11.884s 00:36:11.236 user 0m27.879s 00:36:11.236 sys 0m18.890s 00:36:11.236 16:01:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:11.236 ************************************ 00:36:11.236 END TEST bdev_fio_rw_verify 00:36:11.236 ************************************ 00:36:11.236 16:01:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:36:11.236 16:01:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:36:11.236 16:01:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:36:11.236 16:01:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:36:11.236 16:01:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:36:11.236 16:01:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=trim 00:36:11.236 16:01:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type= 00:36:11.236 16:01:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:36:11.236 16:01:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local 
fio_dir=/usr/src/fio 00:36:11.237 16:01:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:36:11.237 16:01:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z trim ']' 00:36:11.237 16:01:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:36:11.237 16:01:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:36:11.237 16:01:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:36:11.237 16:01:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1315 -- # '[' trim == verify ']' 00:36:11.237 16:01:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1330 -- # '[' trim == trim ']' 00:36:11.237 16:01:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1331 -- # echo rw=trimwrite 00:36:11.237 16:01:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:36:11.237 16:01:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "ab7bc2a9-2a87-4879-8ba6-39c3aa394a67"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "ab7bc2a9-2a87-4879-8ba6-39c3aa394a67",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "296225c3-b90a-4b01-a05f-ae3e8b5d3a56"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "296225c3-b90a-4b01-a05f-ae3e8b5d3a56",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "4f8d6158-6569-436f-9aa1-e08d883e2fb7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4f8d6158-6569-436f-9aa1-e08d883e2fb7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "2a5d04bd-77c7-45c9-a1a9-eb3ab2ea8b2d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2a5d04bd-77c7-45c9-a1a9-eb3ab2ea8b2d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "ea430e54-cbde-4e54-a5d0-00192fdf419a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ea430e54-cbde-4e54-a5d0-00192fdf419a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "78c4f47f-4147-4e0e-8d42-1550a9ee20d1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "78c4f47f-4147-4e0e-8d42-1550a9ee20d1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:36:11.237 16:01:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:36:11.237 16:01:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:36:11.237 /home/vagrant/spdk_repo/spdk 00:36:11.237 16:01:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:36:11.237 16:01:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:36:11.237 16:01:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:36:11.237 00:36:11.237 real 0m12.046s 00:36:11.237 user 0m27.964s 00:36:11.237 sys 0m18.949s 00:36:11.237 ************************************ 00:36:11.237 END TEST bdev_fio 00:36:11.237 ************************************ 00:36:11.237 16:01:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:11.237 16:01:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:36:11.237 16:01:32 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:36:11.237 16:01:32 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:36:11.237 16:01:32 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:36:11.237 16:01:32 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:11.237 16:01:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:36:11.237 ************************************ 00:36:11.237 START TEST bdev_verify 00:36:11.237 ************************************ 00:36:11.237 16:01:32 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:36:11.237 [2024-11-05 16:01:32.228682] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 00:36:11.237 [2024-11-05 16:01:32.228804] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70143 ] 00:36:11.237 [2024-11-05 16:01:32.382415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:11.237 [2024-11-05 16:01:32.485177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:11.237 [2024-11-05 16:01:32.485280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:11.497 Running I/O for 5 seconds... 
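Annotation: bdev_verify drives the standalone bdevperf example rather than fio. Spelling out the flags from the invocation above (a sketch; bdevperf -h is authoritative):

    build/examples/bdevperf \
        --json test/bdev/bdev.json \  # bdev definitions: the six xNVMe bdevs
        -q 128 \                      # queue depth
        -o 4096 \                     # I/O size in bytes
        -w verify \                   # write, read back, and compare
        -t 5 \                        # run time in seconds
        -C \                          # let every core attach to every bdev
        -m 0x3                        # core mask: cores 0 and 1

The -C plus -m 0x3 combination is why each job appears twice in the table that follows, once per core mask, and the MiB/s column is simply IOPS times the 4096 B I/O size: 24320 IOPS x 4096 B = 99.6 MB/s = 95.00 MiB/s for the first sample.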
00:36:13.814 24320.00 IOPS, 95.00 MiB/s [2024-11-05T16:01:36.166Z] 24832.00 IOPS, 97.00 MiB/s [2024-11-05T16:01:37.111Z] 24608.00 IOPS, 96.12 MiB/s [2024-11-05T16:01:38.058Z] 24152.00 IOPS, 94.34 MiB/s [2024-11-05T16:01:38.058Z] 23903.60 IOPS, 93.37 MiB/s 00:36:16.696 Latency(us) 00:36:16.696 [2024-11-05T16:01:38.058Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:16.696 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:36:16.696 Verification LBA range: start 0x0 length 0xa0000 00:36:16.696 nvme0n1 : 5.05 1850.14 7.23 0.00 0.00 69051.00 6906.49 66947.54 00:36:16.696 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:36:16.696 Verification LBA range: start 0xa0000 length 0xa0000 00:36:16.696 nvme0n1 : 5.06 1797.40 7.02 0.00 0.00 71079.20 11393.18 71383.83 00:36:16.696 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:36:16.696 Verification LBA range: start 0x0 length 0xbd0bd 00:36:16.696 nvme1n1 : 5.05 2654.14 10.37 0.00 0.00 47980.61 4184.22 68157.44 00:36:16.696 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:36:16.696 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:36:16.696 nvme1n1 : 5.05 2662.36 10.40 0.00 0.00 47764.49 4411.08 64124.46 00:36:16.696 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:36:16.696 Verification LBA range: start 0x0 length 0x80000 00:36:16.696 nvme2n1 : 5.06 1870.50 7.31 0.00 0.00 67932.34 10485.76 67754.14 00:36:16.696 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:36:16.696 Verification LBA range: start 0x80000 length 0x80000 00:36:16.696 nvme2n1 : 5.06 1846.95 7.21 0.00 0.00 68862.93 6604.01 74610.22 00:36:16.696 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:36:16.696 Verification LBA range: start 0x0 length 0x80000 00:36:16.696 nvme2n2 : 5.05 1848.85 7.22 0.00 0.00 68570.85 12351.02 64931.05 00:36:16.696 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:36:16.696 Verification LBA range: start 0x80000 length 0x80000 00:36:16.696 nvme2n2 : 5.06 1819.70 7.11 0.00 0.00 69762.32 6604.01 72190.42 00:36:16.696 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:36:16.696 Verification LBA range: start 0x0 length 0x80000 00:36:16.696 nvme2n3 : 5.07 1868.19 7.30 0.00 0.00 67739.98 3503.66 63317.86 00:36:16.696 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:36:16.696 Verification LBA range: start 0x80000 length 0x80000 00:36:16.696 nvme2n3 : 5.05 1799.73 7.03 0.00 0.00 70417.61 9679.16 70980.53 00:36:16.696 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:36:16.696 Verification LBA range: start 0x0 length 0x20000 00:36:16.696 nvme3n1 : 5.06 1846.13 7.21 0.00 0.00 68450.47 5973.86 69367.34 00:36:16.696 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:36:16.696 Verification LBA range: start 0x20000 length 0x20000 00:36:16.696 nvme3n1 : 5.07 1818.51 7.10 0.00 0.00 69569.94 6099.89 72190.42 00:36:16.696 [2024-11-05T16:01:38.058Z] =================================================================================================================== 00:36:16.696 [2024-11-05T16:01:38.058Z] Total : 23682.60 92.51 0.00 0.00 64365.18 3503.66 74610.22 00:36:17.656 00:36:17.656 real 0m6.538s 00:36:17.656 user 0m10.796s 00:36:17.656 sys 0m1.352s 00:36:17.656 16:01:38 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:36:17.656 ************************************ 00:36:17.656 END TEST bdev_verify 00:36:17.656 16:01:38 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:36:17.656 ************************************ 00:36:17.656 16:01:38 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:36:17.656 16:01:38 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:36:17.656 16:01:38 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:17.656 16:01:38 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:36:17.656 ************************************ 00:36:17.656 START TEST bdev_verify_big_io 00:36:17.656 ************************************ 00:36:17.656 16:01:38 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:36:17.656 [2024-11-05 16:01:38.828752] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 00:36:17.656 [2024-11-05 16:01:38.828864] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70237 ] 00:36:17.656 [2024-11-05 16:01:38.985463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:17.941 [2024-11-05 16:01:39.088653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:17.942 [2024-11-05 16:01:39.088784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:18.200 Running I/O for 5 seconds... 
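Annotation: each sub-test in this log is wrapped by the run_test helper from autotest_common.sh. Judging from the banners and timing lines it leaves behind, its core is roughly the following sketch (the in-tree helper also validates its argument count and toggles xtrace, as the @1103/@1109 traces show):

    run_test() {
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"    # source of the real/user/sys lines after each test
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }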
00:36:24.041 1184.00 IOPS, 74.00 MiB/s [2024-11-05T16:01:45.664Z] 2435.50 IOPS, 152.22 MiB/s [2024-11-05T16:01:45.664Z] 2765.00 IOPS, 172.81 MiB/s 00:36:24.302 Latency(us) 00:36:24.302 [2024-11-05T16:01:45.664Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:24.302 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:36:24.302 Verification LBA range: start 0x0 length 0xa000 00:36:24.302 nvme0n1 : 5.92 97.33 6.08 0.00 0.00 1262869.62 28432.54 1329271.73 00:36:24.302 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:36:24.302 Verification LBA range: start 0xa000 length 0xa000 00:36:24.302 nvme0n1 : 5.85 131.35 8.21 0.00 0.00 941490.81 97598.23 948557.98 00:36:24.302 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:36:24.302 Verification LBA range: start 0x0 length 0xbd0b 00:36:24.302 nvme1n1 : 5.90 119.34 7.46 0.00 0.00 1023983.24 46177.67 1626099.40 00:36:24.302 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:36:24.302 Verification LBA range: start 0xbd0b length 0xbd0b 00:36:24.302 nvme1n1 : 5.85 188.26 11.77 0.00 0.00 638815.90 7208.96 832408.02 00:36:24.302 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:36:24.302 Verification LBA range: start 0x0 length 0x8000 00:36:24.302 nvme2n1 : 5.92 118.91 7.43 0.00 0.00 1000911.31 17241.01 1438968.91 00:36:24.302 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:36:24.302 Verification LBA range: start 0x8000 length 0x8000 00:36:24.302 nvme2n1 : 5.85 101.17 6.32 0.00 0.00 1156614.34 28029.24 2865032.27 00:36:24.302 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:36:24.302 Verification LBA range: start 0x0 length 0x8000 00:36:24.302 nvme2n2 : 5.91 116.40 7.28 0.00 0.00 989038.00 6956.90 1703532.70 00:36:24.302 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:36:24.302 Verification LBA range: start 0x8000 length 0x8000 00:36:24.302 nvme2n2 : 5.87 114.54 7.16 0.00 0.00 993951.56 23996.26 1451874.46 00:36:24.302 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:36:24.302 Verification LBA range: start 0x0 length 0x8000 00:36:24.302 nvme2n3 : 5.91 127.16 7.95 0.00 0.00 877631.08 41338.09 1213121.77 00:36:24.302 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:36:24.302 Verification LBA range: start 0x8000 length 0x8000 00:36:24.302 nvme2n3 : 5.85 117.86 7.37 0.00 0.00 950262.54 8166.79 2310093.59 00:36:24.302 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:36:24.302 Verification LBA range: start 0x0 length 0x2000 00:36:24.302 nvme3n1 : 5.92 180.98 11.31 0.00 0.00 597834.75 1663.61 980821.86 00:36:24.302 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:36:24.302 Verification LBA range: start 0x2000 length 0x2000 00:36:24.302 nvme3n1 : 5.86 120.06 7.50 0.00 0.00 904977.76 9124.63 2606921.26 00:36:24.302 [2024-11-05T16:01:45.664Z] =================================================================================================================== 00:36:24.302 [2024-11-05T16:01:45.664Z] Total : 1533.35 95.83 0.00 0.00 909463.83 1663.61 2865032.27 00:36:25.246 00:36:25.246 real 0m7.585s 00:36:25.246 user 0m14.021s 00:36:25.246 sys 0m0.372s 00:36:25.246 16:01:46 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:25.246 16:01:46 
blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:36:25.246 ************************************ 00:36:25.246 END TEST bdev_verify_big_io 00:36:25.246 ************************************ 00:36:25.246 16:01:46 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:36:25.246 16:01:46 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:36:25.246 16:01:46 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:25.246 16:01:46 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:36:25.246 ************************************ 00:36:25.246 START TEST bdev_write_zeroes 00:36:25.246 ************************************ 00:36:25.246 16:01:46 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:36:25.246 [2024-11-05 16:01:46.483404] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 00:36:25.246 [2024-11-05 16:01:46.483526] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70347 ] 00:36:25.507 [2024-11-05 16:01:46.644819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:25.507 [2024-11-05 16:01:46.743237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:25.768 Running I/O for 1 seconds... 
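Annotation: the three bdevperf passes in this suite differ only in their workload parameters. Condensing the run_test invocations above (paths shortened):

    bdevperf --json bdev.json -q 128 -o 4096  -w verify       -t 5 -C -m 0x3  # bdev_verify
    bdevperf --json bdev.json -q 128 -o 65536 -w verify       -t 5 -C -m 0x3  # bdev_verify_big_io
    bdevperf --json bdev.json -q 128 -o 4096  -w write_zeroes -t 1            # bdev_write_zeroes, single core

The 64 KiB variant trades IOPS for bandwidth: 1533.35 IOPS x 64 KiB = 95.83 MiB/s in the big-I/O totals, versus 23682.60 IOPS at 4 KiB = 92.51 MiB/s for plain verify.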
00:36:27.152 73617.00 IOPS, 287.57 MiB/s 00:36:27.152 Latency(us) 00:36:27.152 [2024-11-05T16:01:48.514Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:27.152 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:36:27.152 nvme0n1 : 1.03 12014.37 46.93 0.00 0.00 10641.81 4184.22 27625.94 00:36:27.152 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:36:27.152 nvme1n1 : 1.03 13003.03 50.79 0.00 0.00 9824.16 3453.24 25710.28 00:36:27.152 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:36:27.152 nvme2n1 : 1.03 11843.16 46.26 0.00 0.00 10739.93 3377.62 26416.05 00:36:27.152 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:36:27.152 nvme2n2 : 1.03 11823.89 46.19 0.00 0.00 10738.49 5343.70 27222.65 00:36:27.152 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:36:27.152 nvme2n3 : 1.03 11804.93 46.11 0.00 0.00 10744.15 5444.53 28029.24 00:36:27.152 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:36:27.152 nvme3n1 : 1.03 11909.79 46.52 0.00 0.00 10641.89 4234.63 28835.84 00:36:27.152 [2024-11-05T16:01:48.514Z] =================================================================================================================== 00:36:27.152 [2024-11-05T16:01:48.514Z] Total : 72399.18 282.81 0.00 0.00 10544.05 3377.62 28835.84 00:36:27.724 00:36:27.724 real 0m2.467s 00:36:27.724 user 0m1.798s 00:36:27.724 sys 0m0.442s 00:36:27.724 ************************************ 00:36:27.724 END TEST bdev_write_zeroes 00:36:27.724 ************************************ 00:36:27.724 16:01:48 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:27.724 16:01:48 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:36:27.724 16:01:48 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:36:27.724 16:01:48 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:36:27.724 16:01:48 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:27.724 16:01:48 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:36:27.724 ************************************ 00:36:27.724 START TEST bdev_json_nonenclosed 00:36:27.724 ************************************ 00:36:27.724 16:01:48 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:36:27.724 [2024-11-05 16:01:49.019561] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 
00:36:27.724 [2024-11-05 16:01:49.019673] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70390 ] 00:36:28.000 [2024-11-05 16:01:49.176004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:28.000 [2024-11-05 16:01:49.278212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:28.000 [2024-11-05 16:01:49.278303] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:36:28.000 [2024-11-05 16:01:49.278320] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:36:28.000 [2024-11-05 16:01:49.278346] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:36:28.261 ************************************ 00:36:28.261 END TEST bdev_json_nonenclosed 00:36:28.261 00:36:28.261 real 0m0.503s 00:36:28.261 user 0m0.300s 00:36:28.261 sys 0m0.098s 00:36:28.261 16:01:49 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:28.261 16:01:49 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:36:28.261 ************************************ 00:36:28.261 16:01:49 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:36:28.261 16:01:49 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:36:28.261 16:01:49 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:28.261 16:01:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:36:28.261 ************************************ 00:36:28.261 START TEST bdev_json_nonarray 00:36:28.261 ************************************ 00:36:28.261 16:01:49 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:36:28.261 [2024-11-05 16:01:49.582436] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 00:36:28.261 [2024-11-05 16:01:49.582562] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70420 ] 00:36:28.526 [2024-11-05 16:01:49.738963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:28.526 [2024-11-05 16:01:49.841210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:28.526 [2024-11-05 16:01:49.841305] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
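Annotation: bdev_json_nonenclosed and bdev_json_nonarray are negative tests; each feeds bdevperf a deliberately malformed config and passes when json_config_prepare_ctx rejects it with the *ERROR* lines above. Minimal inputs matching those messages would look like the following sketches (the in-tree nonenclosed.json and nonarray.json may differ):

    # nonenclosed.json: top-level braces missing, hence "not enclosed in {}"
    "subsystems": []

    # nonarray.json: "subsystems" is an object, hence "'subsystems' should be an array"
    { "subsystems": {} }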
00:36:28.526 [2024-11-05 16:01:49.841323] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:36:28.526 [2024-11-05 16:01:49.841333] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:36:28.786 00:36:28.786 real 0m0.506s 00:36:28.786 user 0m0.307s 00:36:28.786 sys 0m0.095s 00:36:28.786 16:01:50 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:28.786 16:01:50 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:36:28.786 ************************************ 00:36:28.786 END TEST bdev_json_nonarray 00:36:28.786 ************************************ 00:36:28.786 16:01:50 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:36:28.786 16:01:50 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:36:28.786 16:01:50 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:36:28.786 16:01:50 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:36:28.786 16:01:50 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:36:28.786 16:01:50 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:36:28.786 16:01:50 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:36:28.786 16:01:50 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:36:28.786 16:01:50 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:36:28.786 16:01:50 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:36:28.786 16:01:50 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:36:28.786 16:01:50 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:36:29.358 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:55.948 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:36:55.948 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:36:55.948 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:36:55.948 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:36:55.948 00:36:55.948 real 1m16.183s 00:36:55.948 user 1m26.031s 00:36:55.948 sys 1m36.847s 00:36:55.948 16:02:14 blockdev_xnvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:55.948 ************************************ 00:36:55.948 END TEST blockdev_xnvme 00:36:55.948 ************************************ 00:36:55.948 16:02:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:36:55.948 16:02:15 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:36:55.948 16:02:15 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:36:55.948 16:02:15 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:55.948 16:02:15 -- common/autotest_common.sh@10 -- # set +x 00:36:55.948 ************************************ 00:36:55.948 START TEST ublk 00:36:55.948 ************************************ 00:36:55.948 16:02:15 ublk -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:36:55.948 * Looking for test storage... 
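Annotation: the setup.sh pass above rebinds the four emulated NVMe controllers (1b36:0010) from the kernel nvme driver to uio_pci_generic so a userspace target can claim them; the virtio disk (1af4:1001) is skipped because it still backs mounted filesystems. The manual equivalent for one device is roughly this sketch (setup.sh itself also handles hugepages and device permissions):

    dev=0000:00:10.0
    echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"
    echo "$dev" > "/sys/bus/pci/devices/$dev/driver/unbind"
    echo "$dev" > /sys/bus/pci/drivers_probe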
00:36:55.948 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:36:55.948 16:02:15 ublk -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:55.948 16:02:15 ublk -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:55.948 16:02:15 ublk -- common/autotest_common.sh@1691 -- # lcov --version 00:36:55.948 16:02:15 ublk -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:55.948 16:02:15 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:55.948 16:02:15 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:55.948 16:02:15 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:55.948 16:02:15 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:36:55.948 16:02:15 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:36:55.948 16:02:15 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:36:55.948 16:02:15 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:36:55.948 16:02:15 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:36:55.948 16:02:15 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:36:55.948 16:02:15 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:36:55.948 16:02:15 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:55.948 16:02:15 ublk -- scripts/common.sh@344 -- # case "$op" in 00:36:55.948 16:02:15 ublk -- scripts/common.sh@345 -- # : 1 00:36:55.948 16:02:15 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:55.948 16:02:15 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:55.948 16:02:15 ublk -- scripts/common.sh@365 -- # decimal 1 00:36:55.948 16:02:15 ublk -- scripts/common.sh@353 -- # local d=1 00:36:55.948 16:02:15 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:55.948 16:02:15 ublk -- scripts/common.sh@355 -- # echo 1 00:36:55.948 16:02:15 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:36:55.948 16:02:15 ublk -- scripts/common.sh@366 -- # decimal 2 00:36:55.948 16:02:15 ublk -- scripts/common.sh@353 -- # local d=2 00:36:55.948 16:02:15 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:55.948 16:02:15 ublk -- scripts/common.sh@355 -- # echo 2 00:36:55.948 16:02:15 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:36:55.948 16:02:15 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:55.948 16:02:15 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:55.948 16:02:15 ublk -- scripts/common.sh@368 -- # return 0 00:36:55.948 16:02:15 ublk -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:55.948 16:02:15 ublk -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:55.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:55.948 --rc genhtml_branch_coverage=1 00:36:55.948 --rc genhtml_function_coverage=1 00:36:55.948 --rc genhtml_legend=1 00:36:55.948 --rc geninfo_all_blocks=1 00:36:55.948 --rc geninfo_unexecuted_blocks=1 00:36:55.948 00:36:55.948 ' 00:36:55.948 16:02:15 ublk -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:55.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:55.948 --rc genhtml_branch_coverage=1 00:36:55.948 --rc genhtml_function_coverage=1 00:36:55.948 --rc genhtml_legend=1 00:36:55.948 --rc geninfo_all_blocks=1 00:36:55.948 --rc geninfo_unexecuted_blocks=1 00:36:55.948 00:36:55.948 ' 00:36:55.948 16:02:15 ublk -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:55.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:55.948 --rc genhtml_branch_coverage=1 00:36:55.948 --rc 
genhtml_function_coverage=1 00:36:55.948 --rc genhtml_legend=1 00:36:55.948 --rc geninfo_all_blocks=1 00:36:55.948 --rc geninfo_unexecuted_blocks=1 00:36:55.948 00:36:55.948 ' 00:36:55.948 16:02:15 ublk -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:55.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:55.948 --rc genhtml_branch_coverage=1 00:36:55.948 --rc genhtml_function_coverage=1 00:36:55.948 --rc genhtml_legend=1 00:36:55.948 --rc geninfo_all_blocks=1 00:36:55.948 --rc geninfo_unexecuted_blocks=1 00:36:55.948 00:36:55.948 ' 00:36:55.948 16:02:15 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:36:55.948 16:02:15 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:36:55.948 16:02:15 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:36:55.948 16:02:15 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:36:55.948 16:02:15 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:36:55.948 16:02:15 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:36:55.948 16:02:15 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:36:55.948 16:02:15 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:36:55.948 16:02:15 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:36:55.948 16:02:15 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:36:55.948 16:02:15 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:36:55.948 16:02:15 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:36:55.948 16:02:15 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:36:55.948 16:02:15 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:36:55.948 16:02:15 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:36:55.948 16:02:15 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:36:55.948 16:02:15 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:36:55.948 16:02:15 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:36:55.948 16:02:15 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:36:55.948 16:02:15 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:36:55.948 16:02:15 ublk -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:36:55.948 16:02:15 ublk -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:55.948 16:02:15 ublk -- common/autotest_common.sh@10 -- # set +x 00:36:55.948 ************************************ 00:36:55.948 START TEST test_save_ublk_config 00:36:55.948 ************************************ 00:36:55.948 16:02:15 ublk.test_save_ublk_config -- common/autotest_common.sh@1127 -- # test_save_config 00:36:55.948 16:02:15 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:36:55.948 16:02:15 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=70711 00:36:55.948 16:02:15 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:36:55.948 16:02:15 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 70711 00:36:55.948 16:02:15 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # '[' -z 70711 ']' 00:36:55.948 16:02:15 ublk.test_save_ublk_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:55.948 16:02:15 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:55.948 16:02:15 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:55.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
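Annotation: test_save_ublk_config exercises the round trip from live RPCs to a saved JSON config. Condensing the steps traced below (RPC names are verbatim from this log; CLI argument spellings are approximate):

    modprobe ublk_drv                             # kernel-side ublk driver
    spdk_tgt -L ublk & tgtpid=$!                  # target with ublk debug logging
    waitforlisten "$tgtpid"                       # until /var/tmp/spdk.sock answers
    rpc.py ublk_create_target                     # cpumask defaults to "1"
    rpc.py bdev_malloc_create -b malloc0 32 4096  # 8192 x 4096 B blocks per the saved config
    rpc.py ublk_start_disk malloc0 0              # ublk_id 0, 1 queue, depth 128 -> /dev/ublkb0
    rpc.py save_config                            # emits the JSON dump that follows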
00:36:55.948 16:02:15 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:55.948 16:02:15 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:36:55.948 16:02:15 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:36:55.948 [2024-11-05 16:02:15.251914] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 00:36:55.948 [2024-11-05 16:02:15.252028] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70711 ] 00:36:55.948 [2024-11-05 16:02:15.413064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:55.948 [2024-11-05 16:02:15.510781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:55.948 16:02:16 ublk.test_save_ublk_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:55.948 16:02:16 ublk.test_save_ublk_config -- common/autotest_common.sh@866 -- # return 0 00:36:55.948 16:02:16 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:36:55.948 16:02:16 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:36:55.948 16:02:16 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:55.948 16:02:16 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:36:55.948 [2024-11-05 16:02:16.146776] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:36:55.948 [2024-11-05 16:02:16.148028] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:36:55.948 malloc0 00:36:55.948 [2024-11-05 16:02:16.234920] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:36:55.948 [2024-11-05 16:02:16.235027] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:36:55.948 [2024-11-05 16:02:16.235077] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:36:55.948 [2024-11-05 16:02:16.235089] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:36:55.948 [2024-11-05 16:02:16.243069] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:36:55.949 [2024-11-05 16:02:16.243096] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:36:55.949 [2024-11-05 16:02:16.250763] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:36:55.949 [2024-11-05 16:02:16.250856] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:36:55.949 [2024-11-05 16:02:16.267759] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:36:55.949 0 00:36:55.949 16:02:16 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:55.949 16:02:16 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:36:55.949 16:02:16 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:55.949 16:02:16 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:36:55.949 16:02:16 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:55.949 16:02:16 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:36:55.949 "subsystems": [ 00:36:55.949 { 00:36:55.949 "subsystem": 
"fsdev", 00:36:55.949 "config": [ 00:36:55.949 { 00:36:55.949 "method": "fsdev_set_opts", 00:36:55.949 "params": { 00:36:55.949 "fsdev_io_pool_size": 65535, 00:36:55.949 "fsdev_io_cache_size": 256 00:36:55.949 } 00:36:55.949 } 00:36:55.949 ] 00:36:55.949 }, 00:36:55.949 { 00:36:55.949 "subsystem": "keyring", 00:36:55.949 "config": [] 00:36:55.949 }, 00:36:55.949 { 00:36:55.949 "subsystem": "iobuf", 00:36:55.949 "config": [ 00:36:55.949 { 00:36:55.949 "method": "iobuf_set_options", 00:36:55.949 "params": { 00:36:55.949 "small_pool_count": 8192, 00:36:55.949 "large_pool_count": 1024, 00:36:55.949 "small_bufsize": 8192, 00:36:55.949 "large_bufsize": 135168, 00:36:55.949 "enable_numa": false 00:36:55.949 } 00:36:55.949 } 00:36:55.949 ] 00:36:55.949 }, 00:36:55.949 { 00:36:55.949 "subsystem": "sock", 00:36:55.949 "config": [ 00:36:55.949 { 00:36:55.949 "method": "sock_set_default_impl", 00:36:55.949 "params": { 00:36:55.949 "impl_name": "posix" 00:36:55.949 } 00:36:55.949 }, 00:36:55.949 { 00:36:55.949 "method": "sock_impl_set_options", 00:36:55.949 "params": { 00:36:55.949 "impl_name": "ssl", 00:36:55.949 "recv_buf_size": 4096, 00:36:55.949 "send_buf_size": 4096, 00:36:55.949 "enable_recv_pipe": true, 00:36:55.949 "enable_quickack": false, 00:36:55.949 "enable_placement_id": 0, 00:36:55.949 "enable_zerocopy_send_server": true, 00:36:55.949 "enable_zerocopy_send_client": false, 00:36:55.949 "zerocopy_threshold": 0, 00:36:55.949 "tls_version": 0, 00:36:55.949 "enable_ktls": false 00:36:55.949 } 00:36:55.949 }, 00:36:55.949 { 00:36:55.949 "method": "sock_impl_set_options", 00:36:55.949 "params": { 00:36:55.949 "impl_name": "posix", 00:36:55.949 "recv_buf_size": 2097152, 00:36:55.949 "send_buf_size": 2097152, 00:36:55.949 "enable_recv_pipe": true, 00:36:55.949 "enable_quickack": false, 00:36:55.949 "enable_placement_id": 0, 00:36:55.949 "enable_zerocopy_send_server": true, 00:36:55.949 "enable_zerocopy_send_client": false, 00:36:55.949 "zerocopy_threshold": 0, 00:36:55.949 "tls_version": 0, 00:36:55.949 "enable_ktls": false 00:36:55.949 } 00:36:55.949 } 00:36:55.949 ] 00:36:55.949 }, 00:36:55.949 { 00:36:55.949 "subsystem": "vmd", 00:36:55.949 "config": [] 00:36:55.949 }, 00:36:55.949 { 00:36:55.949 "subsystem": "accel", 00:36:55.949 "config": [ 00:36:55.949 { 00:36:55.949 "method": "accel_set_options", 00:36:55.949 "params": { 00:36:55.949 "small_cache_size": 128, 00:36:55.949 "large_cache_size": 16, 00:36:55.949 "task_count": 2048, 00:36:55.949 "sequence_count": 2048, 00:36:55.949 "buf_count": 2048 00:36:55.949 } 00:36:55.949 } 00:36:55.949 ] 00:36:55.949 }, 00:36:55.949 { 00:36:55.949 "subsystem": "bdev", 00:36:55.949 "config": [ 00:36:55.949 { 00:36:55.949 "method": "bdev_set_options", 00:36:55.949 "params": { 00:36:55.949 "bdev_io_pool_size": 65535, 00:36:55.949 "bdev_io_cache_size": 256, 00:36:55.949 "bdev_auto_examine": true, 00:36:55.949 "iobuf_small_cache_size": 128, 00:36:55.949 "iobuf_large_cache_size": 16 00:36:55.949 } 00:36:55.949 }, 00:36:55.949 { 00:36:55.949 "method": "bdev_raid_set_options", 00:36:55.949 "params": { 00:36:55.949 "process_window_size_kb": 1024, 00:36:55.949 "process_max_bandwidth_mb_sec": 0 00:36:55.949 } 00:36:55.949 }, 00:36:55.949 { 00:36:55.949 "method": "bdev_iscsi_set_options", 00:36:55.949 "params": { 00:36:55.949 "timeout_sec": 30 00:36:55.949 } 00:36:55.949 }, 00:36:55.949 { 00:36:55.949 "method": "bdev_nvme_set_options", 00:36:55.949 "params": { 00:36:55.949 "action_on_timeout": "none", 00:36:55.949 "timeout_us": 0, 00:36:55.949 "timeout_admin_us": 0, 
00:36:55.949 "keep_alive_timeout_ms": 10000, 00:36:55.949 "arbitration_burst": 0, 00:36:55.949 "low_priority_weight": 0, 00:36:55.949 "medium_priority_weight": 0, 00:36:55.949 "high_priority_weight": 0, 00:36:55.949 "nvme_adminq_poll_period_us": 10000, 00:36:55.949 "nvme_ioq_poll_period_us": 0, 00:36:55.949 "io_queue_requests": 0, 00:36:55.949 "delay_cmd_submit": true, 00:36:55.949 "transport_retry_count": 4, 00:36:55.949 "bdev_retry_count": 3, 00:36:55.949 "transport_ack_timeout": 0, 00:36:55.949 "ctrlr_loss_timeout_sec": 0, 00:36:55.949 "reconnect_delay_sec": 0, 00:36:55.949 "fast_io_fail_timeout_sec": 0, 00:36:55.949 "disable_auto_failback": false, 00:36:55.949 "generate_uuids": false, 00:36:55.949 "transport_tos": 0, 00:36:55.949 "nvme_error_stat": false, 00:36:55.949 "rdma_srq_size": 0, 00:36:55.949 "io_path_stat": false, 00:36:55.949 "allow_accel_sequence": false, 00:36:55.949 "rdma_max_cq_size": 0, 00:36:55.949 "rdma_cm_event_timeout_ms": 0, 00:36:55.949 "dhchap_digests": [ 00:36:55.949 "sha256", 00:36:55.949 "sha384", 00:36:55.949 "sha512" 00:36:55.949 ], 00:36:55.949 "dhchap_dhgroups": [ 00:36:55.949 "null", 00:36:55.949 "ffdhe2048", 00:36:55.949 "ffdhe3072", 00:36:55.949 "ffdhe4096", 00:36:55.949 "ffdhe6144", 00:36:55.949 "ffdhe8192" 00:36:55.949 ] 00:36:55.949 } 00:36:55.949 }, 00:36:55.949 { 00:36:55.949 "method": "bdev_nvme_set_hotplug", 00:36:55.949 "params": { 00:36:55.949 "period_us": 100000, 00:36:55.949 "enable": false 00:36:55.949 } 00:36:55.949 }, 00:36:55.949 { 00:36:55.949 "method": "bdev_malloc_create", 00:36:55.949 "params": { 00:36:55.949 "name": "malloc0", 00:36:55.949 "num_blocks": 8192, 00:36:55.949 "block_size": 4096, 00:36:55.949 "physical_block_size": 4096, 00:36:55.949 "uuid": "0975a8b8-cda1-4cbe-8a8d-1f644c970157", 00:36:55.949 "optimal_io_boundary": 0, 00:36:55.949 "md_size": 0, 00:36:55.949 "dif_type": 0, 00:36:55.949 "dif_is_head_of_md": false, 00:36:55.949 "dif_pi_format": 0 00:36:55.949 } 00:36:55.949 }, 00:36:55.949 { 00:36:55.949 "method": "bdev_wait_for_examine" 00:36:55.949 } 00:36:55.949 ] 00:36:55.949 }, 00:36:55.949 { 00:36:55.949 "subsystem": "scsi", 00:36:55.949 "config": null 00:36:55.949 }, 00:36:55.949 { 00:36:55.949 "subsystem": "scheduler", 00:36:55.949 "config": [ 00:36:55.949 { 00:36:55.949 "method": "framework_set_scheduler", 00:36:55.949 "params": { 00:36:55.949 "name": "static" 00:36:55.949 } 00:36:55.949 } 00:36:55.949 ] 00:36:55.949 }, 00:36:55.949 { 00:36:55.949 "subsystem": "vhost_scsi", 00:36:55.949 "config": [] 00:36:55.949 }, 00:36:55.949 { 00:36:55.949 "subsystem": "vhost_blk", 00:36:55.949 "config": [] 00:36:55.949 }, 00:36:55.949 { 00:36:55.949 "subsystem": "ublk", 00:36:55.949 "config": [ 00:36:55.949 { 00:36:55.949 "method": "ublk_create_target", 00:36:55.949 "params": { 00:36:55.949 "cpumask": "1" 00:36:55.949 } 00:36:55.949 }, 00:36:55.949 { 00:36:55.949 "method": "ublk_start_disk", 00:36:55.949 "params": { 00:36:55.949 "bdev_name": "malloc0", 00:36:55.949 "ublk_id": 0, 00:36:55.949 "num_queues": 1, 00:36:55.949 "queue_depth": 128 00:36:55.949 } 00:36:55.949 } 00:36:55.949 ] 00:36:55.949 }, 00:36:55.949 { 00:36:55.949 "subsystem": "nbd", 00:36:55.949 "config": [] 00:36:55.949 }, 00:36:55.949 { 00:36:55.949 "subsystem": "nvmf", 00:36:55.949 "config": [ 00:36:55.949 { 00:36:55.949 "method": "nvmf_set_config", 00:36:55.949 "params": { 00:36:55.949 "discovery_filter": "match_any", 00:36:55.949 "admin_cmd_passthru": { 00:36:55.949 "identify_ctrlr": false 00:36:55.949 }, 00:36:55.949 "dhchap_digests": [ 00:36:55.949 "sha256", 
00:36:55.949 "sha384", 00:36:55.949 "sha512" 00:36:55.949 ], 00:36:55.949 "dhchap_dhgroups": [ 00:36:55.949 "null", 00:36:55.949 "ffdhe2048", 00:36:55.949 "ffdhe3072", 00:36:55.949 "ffdhe4096", 00:36:55.949 "ffdhe6144", 00:36:55.949 "ffdhe8192" 00:36:55.949 ] 00:36:55.949 } 00:36:55.949 }, 00:36:55.949 { 00:36:55.949 "method": "nvmf_set_max_subsystems", 00:36:55.949 "params": { 00:36:55.949 "max_subsystems": 1024 00:36:55.949 } 00:36:55.949 }, 00:36:55.949 { 00:36:55.949 "method": "nvmf_set_crdt", 00:36:55.950 "params": { 00:36:55.950 "crdt1": 0, 00:36:55.950 "crdt2": 0, 00:36:55.950 "crdt3": 0 00:36:55.950 } 00:36:55.950 } 00:36:55.950 ] 00:36:55.950 }, 00:36:55.950 { 00:36:55.950 "subsystem": "iscsi", 00:36:55.950 "config": [ 00:36:55.950 { 00:36:55.950 "method": "iscsi_set_options", 00:36:55.950 "params": { 00:36:55.950 "node_base": "iqn.2016-06.io.spdk", 00:36:55.950 "max_sessions": 128, 00:36:55.950 "max_connections_per_session": 2, 00:36:55.950 "max_queue_depth": 64, 00:36:55.950 "default_time2wait": 2, 00:36:55.950 "default_time2retain": 20, 00:36:55.950 "first_burst_length": 8192, 00:36:55.950 "immediate_data": true, 00:36:55.950 "allow_duplicated_isid": false, 00:36:55.950 "error_recovery_level": 0, 00:36:55.950 "nop_timeout": 60, 00:36:55.950 "nop_in_interval": 30, 00:36:55.950 "disable_chap": false, 00:36:55.950 "require_chap": false, 00:36:55.950 "mutual_chap": false, 00:36:55.950 "chap_group": 0, 00:36:55.950 "max_large_datain_per_connection": 64, 00:36:55.950 "max_r2t_per_connection": 4, 00:36:55.950 "pdu_pool_size": 36864, 00:36:55.950 "immediate_data_pool_size": 16384, 00:36:55.950 "data_out_pool_size": 2048 00:36:55.950 } 00:36:55.950 } 00:36:55.950 ] 00:36:55.950 } 00:36:55.950 ] 00:36:55.950 }' 00:36:55.950 16:02:16 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 70711 00:36:55.950 16:02:16 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # '[' -z 70711 ']' 00:36:55.950 16:02:16 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # kill -0 70711 00:36:55.950 16:02:16 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # uname 00:36:55.950 16:02:16 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:55.950 16:02:16 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70711 00:36:55.950 16:02:16 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:36:55.950 killing process with pid 70711 00:36:55.950 16:02:16 ublk.test_save_ublk_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:36:55.950 16:02:16 ublk.test_save_ublk_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70711' 00:36:55.950 16:02:16 ublk.test_save_ublk_config -- common/autotest_common.sh@971 -- # kill 70711 00:36:55.950 16:02:16 ublk.test_save_ublk_config -- common/autotest_common.sh@976 -- # wait 70711 00:36:56.545 [2024-11-05 16:02:17.593409] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:36:56.545 [2024-11-05 16:02:17.621773] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:36:56.545 [2024-11-05 16:02:17.621879] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:36:56.545 [2024-11-05 16:02:17.630778] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:36:56.545 [2024-11-05 16:02:17.630846] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from 
tailq 00:36:56.545 [2024-11-05 16:02:17.630862] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:36:56.545 [2024-11-05 16:02:17.630889] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:36:56.545 [2024-11-05 16:02:17.631050] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:36:57.918 16:02:18 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=70762 00:36:57.918 16:02:18 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 70762 00:36:57.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:57.918 16:02:18 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # '[' -z 70762 ']' 00:36:57.918 16:02:18 ublk.test_save_ublk_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:57.918 16:02:18 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:57.918 16:02:18 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:57.918 16:02:18 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:57.918 16:02:18 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:36:57.918 16:02:18 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:36:57.918 16:02:18 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:36:57.918 "subsystems": [ 00:36:57.918 { 00:36:57.918 "subsystem": "fsdev", 00:36:57.918 "config": [ 00:36:57.918 { 00:36:57.918 "method": "fsdev_set_opts", 00:36:57.918 "params": { 00:36:57.918 "fsdev_io_pool_size": 65535, 00:36:57.918 "fsdev_io_cache_size": 256 00:36:57.918 } 00:36:57.918 } 00:36:57.918 ] 00:36:57.918 }, 00:36:57.918 { 00:36:57.918 "subsystem": "keyring", 00:36:57.918 "config": [] 00:36:57.918 }, 00:36:57.918 { 00:36:57.918 "subsystem": "iobuf", 00:36:57.918 "config": [ 00:36:57.918 { 00:36:57.918 "method": "iobuf_set_options", 00:36:57.918 "params": { 00:36:57.918 "small_pool_count": 8192, 00:36:57.918 "large_pool_count": 1024, 00:36:57.918 "small_bufsize": 8192, 00:36:57.918 "large_bufsize": 135168, 00:36:57.918 "enable_numa": false 00:36:57.918 } 00:36:57.918 } 00:36:57.918 ] 00:36:57.918 }, 00:36:57.918 { 00:36:57.918 "subsystem": "sock", 00:36:57.918 "config": [ 00:36:57.918 { 00:36:57.918 "method": "sock_set_default_impl", 00:36:57.918 "params": { 00:36:57.918 "impl_name": "posix" 00:36:57.918 } 00:36:57.918 }, 00:36:57.918 { 00:36:57.918 "method": "sock_impl_set_options", 00:36:57.918 "params": { 00:36:57.918 "impl_name": "ssl", 00:36:57.918 "recv_buf_size": 4096, 00:36:57.918 "send_buf_size": 4096, 00:36:57.918 "enable_recv_pipe": true, 00:36:57.918 "enable_quickack": false, 00:36:57.918 "enable_placement_id": 0, 00:36:57.918 "enable_zerocopy_send_server": true, 00:36:57.918 "enable_zerocopy_send_client": false, 00:36:57.918 "zerocopy_threshold": 0, 00:36:57.918 "tls_version": 0, 00:36:57.918 "enable_ktls": false 00:36:57.918 } 00:36:57.918 }, 00:36:57.918 { 00:36:57.918 "method": "sock_impl_set_options", 00:36:57.918 "params": { 00:36:57.918 "impl_name": "posix", 00:36:57.918 "recv_buf_size": 2097152, 00:36:57.918 "send_buf_size": 2097152, 00:36:57.918 "enable_recv_pipe": true, 00:36:57.918 "enable_quickack": false, 00:36:57.918 "enable_placement_id": 0, 00:36:57.918 "enable_zerocopy_send_server": true, 00:36:57.918 "enable_zerocopy_send_client": false, 00:36:57.918 "zerocopy_threshold": 0, 
00:36:57.918 "tls_version": 0, 00:36:57.918 "enable_ktls": false 00:36:57.918 } 00:36:57.918 } 00:36:57.918 ] 00:36:57.918 }, 00:36:57.918 { 00:36:57.918 "subsystem": "vmd", 00:36:57.918 "config": [] 00:36:57.918 }, 00:36:57.918 { 00:36:57.918 "subsystem": "accel", 00:36:57.918 "config": [ 00:36:57.918 { 00:36:57.918 "method": "accel_set_options", 00:36:57.918 "params": { 00:36:57.918 "small_cache_size": 128, 00:36:57.918 "large_cache_size": 16, 00:36:57.918 "task_count": 2048, 00:36:57.918 "sequence_count": 2048, 00:36:57.918 "buf_count": 2048 00:36:57.918 } 00:36:57.918 } 00:36:57.918 ] 00:36:57.918 }, 00:36:57.918 { 00:36:57.918 "subsystem": "bdev", 00:36:57.918 "config": [ 00:36:57.918 { 00:36:57.918 "method": "bdev_set_options", 00:36:57.918 "params": { 00:36:57.918 "bdev_io_pool_size": 65535, 00:36:57.918 "bdev_io_cache_size": 256, 00:36:57.918 "bdev_auto_examine": true, 00:36:57.918 "iobuf_small_cache_size": 128, 00:36:57.918 "iobuf_large_cache_size": 16 00:36:57.918 } 00:36:57.918 }, 00:36:57.918 { 00:36:57.918 "method": "bdev_raid_set_options", 00:36:57.918 "params": { 00:36:57.918 "process_window_size_kb": 1024, 00:36:57.918 "process_max_bandwidth_mb_sec": 0 00:36:57.918 } 00:36:57.918 }, 00:36:57.918 { 00:36:57.918 "method": "bdev_iscsi_set_options", 00:36:57.918 "params": { 00:36:57.918 "timeout_sec": 30 00:36:57.918 } 00:36:57.918 }, 00:36:57.918 { 00:36:57.918 "method": "bdev_nvme_set_options", 00:36:57.918 "params": { 00:36:57.918 "action_on_timeout": "none", 00:36:57.918 "timeout_us": 0, 00:36:57.918 "timeout_admin_us": 0, 00:36:57.918 "keep_alive_timeout_ms": 10000, 00:36:57.918 "arbitration_burst": 0, 00:36:57.918 "low_priority_weight": 0, 00:36:57.918 "medium_priority_weight": 0, 00:36:57.918 "high_priority_weight": 0, 00:36:57.918 "nvme_adminq_poll_period_us": 10000, 00:36:57.918 "nvme_ioq_poll_period_us": 0, 00:36:57.918 "io_queue_requests": 0, 00:36:57.918 "delay_cmd_submit": true, 00:36:57.918 "transport_retry_count": 4, 00:36:57.918 "bdev_retry_count": 3, 00:36:57.918 "transport_ack_timeout": 0, 00:36:57.918 "ctrlr_loss_timeout_sec": 0, 00:36:57.918 "reconnect_delay_sec": 0, 00:36:57.918 "fast_io_fail_timeout_sec": 0, 00:36:57.918 "disable_auto_failback": false, 00:36:57.918 "generate_uuids": false, 00:36:57.918 "transport_tos": 0, 00:36:57.918 "nvme_error_stat": false, 00:36:57.918 "rdma_srq_size": 0, 00:36:57.918 "io_path_stat": false, 00:36:57.918 "allow_accel_sequence": false, 00:36:57.918 "rdma_max_cq_size": 0, 00:36:57.918 "rdma_cm_event_timeout_ms": 0, 00:36:57.918 "dhchap_digests": [ 00:36:57.918 "sha256", 00:36:57.918 "sha384", 00:36:57.918 "sha512" 00:36:57.918 ], 00:36:57.918 "dhchap_dhgroups": [ 00:36:57.918 "null", 00:36:57.918 "ffdhe2048", 00:36:57.918 "ffdhe3072", 00:36:57.918 "ffdhe4096", 00:36:57.918 "ffdhe6144", 00:36:57.918 "ffdhe8192" 00:36:57.918 ] 00:36:57.918 } 00:36:57.918 }, 00:36:57.918 { 00:36:57.918 "method": "bdev_nvme_set_hotplug", 00:36:57.918 "params": { 00:36:57.918 "period_us": 100000, 00:36:57.918 "enable": false 00:36:57.918 } 00:36:57.918 }, 00:36:57.918 { 00:36:57.918 "method": "bdev_malloc_create", 00:36:57.918 "params": { 00:36:57.918 "name": "malloc0", 00:36:57.918 "num_blocks": 8192, 00:36:57.918 "block_size": 4096, 00:36:57.918 "physical_block_size": 4096, 00:36:57.918 "uuid": "0975a8b8-cda1-4cbe-8a8d-1f644c970157", 00:36:57.918 "optimal_io_boundary": 0, 00:36:57.918 "md_size": 0, 00:36:57.918 "dif_type": 0, 00:36:57.918 "dif_is_head_of_md": false, 00:36:57.918 "dif_pi_format": 0 00:36:57.918 } 00:36:57.918 }, 00:36:57.918 
{ 00:36:57.918 "method": "bdev_wait_for_examine" 00:36:57.918 } 00:36:57.918 ] 00:36:57.918 }, 00:36:57.918 { 00:36:57.918 "subsystem": "scsi", 00:36:57.918 "config": null 00:36:57.918 }, 00:36:57.918 { 00:36:57.918 "subsystem": "scheduler", 00:36:57.918 "config": [ 00:36:57.918 { 00:36:57.918 "method": "framework_set_scheduler", 00:36:57.918 "params": { 00:36:57.918 "name": "static" 00:36:57.918 } 00:36:57.918 } 00:36:57.918 ] 00:36:57.918 }, 00:36:57.918 { 00:36:57.918 "subsystem": "vhost_scsi", 00:36:57.918 "config": [] 00:36:57.918 }, 00:36:57.918 { 00:36:57.918 "subsystem": "vhost_blk", 00:36:57.918 "config": [] 00:36:57.918 }, 00:36:57.918 { 00:36:57.918 "subsystem": "ublk", 00:36:57.918 "config": [ 00:36:57.918 { 00:36:57.918 "method": "ublk_create_target", 00:36:57.918 "params": { 00:36:57.918 "cpumask": "1" 00:36:57.918 } 00:36:57.918 }, 00:36:57.918 { 00:36:57.918 "method": "ublk_start_disk", 00:36:57.919 "params": { 00:36:57.919 "bdev_name": "malloc0", 00:36:57.919 "ublk_id": 0, 00:36:57.919 "num_queues": 1, 00:36:57.919 "queue_depth": 128 00:36:57.919 } 00:36:57.919 } 00:36:57.919 ] 00:36:57.919 }, 00:36:57.919 { 00:36:57.919 "subsystem": "nbd", 00:36:57.919 "config": [] 00:36:57.919 }, 00:36:57.919 { 00:36:57.919 "subsystem": "nvmf", 00:36:57.919 "config": [ 00:36:57.919 { 00:36:57.919 "method": "nvmf_set_config", 00:36:57.919 "params": { 00:36:57.919 "discovery_filter": "match_any", 00:36:57.919 "admin_cmd_passthru": { 00:36:57.919 "identify_ctrlr": false 00:36:57.919 }, 00:36:57.919 "dhchap_digests": [ 00:36:57.919 "sha256", 00:36:57.919 "sha384", 00:36:57.919 "sha512" 00:36:57.919 ], 00:36:57.919 "dhchap_dhgroups": [ 00:36:57.919 "null", 00:36:57.919 "ffdhe2048", 00:36:57.919 "ffdhe3072", 00:36:57.919 "ffdhe4096", 00:36:57.919 "ffdhe6144", 00:36:57.919 "ffdhe8192" 00:36:57.919 ] 00:36:57.919 } 00:36:57.919 }, 00:36:57.919 { 00:36:57.919 "method": "nvmf_set_max_subsystems", 00:36:57.919 "params": { 00:36:57.919 "max_subsystems": 1024 00:36:57.919 } 00:36:57.919 }, 00:36:57.919 { 00:36:57.919 "method": "nvmf_set_crdt", 00:36:57.919 "params": { 00:36:57.919 "crdt1": 0, 00:36:57.919 "crdt2": 0, 00:36:57.919 "crdt3": 0 00:36:57.919 } 00:36:57.919 } 00:36:57.919 ] 00:36:57.919 }, 00:36:57.919 { 00:36:57.919 "subsystem": "iscsi", 00:36:57.919 "config": [ 00:36:57.919 { 00:36:57.919 "method": "iscsi_set_options", 00:36:57.919 "params": { 00:36:57.919 "node_base": "iqn.2016-06.io.spdk", 00:36:57.919 "max_sessions": 128, 00:36:57.919 "max_connections_per_session": 2, 00:36:57.919 "max_queue_depth": 64, 00:36:57.919 "default_time2wait": 2, 00:36:57.919 "default_time2retain": 20, 00:36:57.919 "first_burst_length": 8192, 00:36:57.919 "immediate_data": true, 00:36:57.919 "allow_duplicated_isid": false, 00:36:57.919 "error_recovery_level": 0, 00:36:57.919 "nop_timeout": 60, 00:36:57.919 "nop_in_interval": 30, 00:36:57.919 "disable_chap": false, 00:36:57.919 "require_chap": false, 00:36:57.919 "mutual_chap": false, 00:36:57.919 "chap_group": 0, 00:36:57.919 "max_large_datain_per_connection": 64, 00:36:57.919 "max_r2t_per_connection": 4, 00:36:57.919 "pdu_pool_size": 36864, 00:36:57.919 "immediate_data_pool_size": 16384, 00:36:57.919 "data_out_pool_size": 2048 00:36:57.919 } 00:36:57.919 } 00:36:57.919 ] 00:36:57.919 } 00:36:57.919 ] 00:36:57.919 }' 00:36:57.919 [2024-11-05 16:02:18.948906] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 
00:36:57.919 [2024-11-05 16:02:18.949027] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70762 ] 00:36:57.919 [2024-11-05 16:02:19.106466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:57.919 [2024-11-05 16:02:19.202134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:58.855 [2024-11-05 16:02:19.948754] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:36:58.855 [2024-11-05 16:02:19.949571] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:36:58.855 [2024-11-05 16:02:19.956869] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:36:58.855 [2024-11-05 16:02:19.956937] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:36:58.855 [2024-11-05 16:02:19.956946] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:36:58.855 [2024-11-05 16:02:19.956952] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:36:58.855 [2024-11-05 16:02:19.964866] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:36:58.855 [2024-11-05 16:02:19.964887] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:36:58.855 [2024-11-05 16:02:19.972760] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:36:58.855 [2024-11-05 16:02:19.972849] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:36:58.855 [2024-11-05 16:02:19.989766] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:36:58.855 16:02:20 ublk.test_save_ublk_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:58.855 16:02:20 ublk.test_save_ublk_config -- common/autotest_common.sh@866 -- # return 0 00:36:58.855 16:02:20 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:36:58.855 16:02:20 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:58.855 16:02:20 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:36:58.855 16:02:20 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:36:58.855 16:02:20 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:58.855 16:02:20 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:36:58.855 16:02:20 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:36:58.855 16:02:20 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 70762 00:36:58.855 16:02:20 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # '[' -z 70762 ']' 00:36:58.855 16:02:20 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # kill -0 70762 00:36:58.855 16:02:20 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # uname 00:36:58.855 16:02:20 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:58.855 16:02:20 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70762 00:36:58.855 16:02:20 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:36:58.855 killing process with pid 70762 00:36:58.855 
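[annotation] No further RPCs are issued after the restart: ublk_create_target and ublk_start_disk are replayed from the JSON alone, and the checks above only confirm that /dev/ublkb0 came back before the target is killed again. A sketch of that verification, using the same jq filter the test runs:

    scripts/rpc.py ublk_get_disks | jq -r '.[0].ublk_device'   # expected: /dev/ublkb0
    test -b /dev/ublkb0                                        # block node exists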
16:02:20 ublk.test_save_ublk_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:36:58.855 16:02:20 ublk.test_save_ublk_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70762' 00:36:58.855 16:02:20 ublk.test_save_ublk_config -- common/autotest_common.sh@971 -- # kill 70762 00:36:58.855 16:02:20 ublk.test_save_ublk_config -- common/autotest_common.sh@976 -- # wait 70762 00:37:00.265 [2024-11-05 16:02:21.231902] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:37:00.265 [2024-11-05 16:02:21.272827] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:37:00.265 [2024-11-05 16:02:21.272944] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:37:00.265 [2024-11-05 16:02:21.279761] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:37:00.265 [2024-11-05 16:02:21.279806] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:37:00.265 [2024-11-05 16:02:21.279813] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:37:00.265 [2024-11-05 16:02:21.279838] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:37:00.265 [2024-11-05 16:02:21.279969] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:37:01.200 16:02:22 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:37:01.200 ************************************ 00:37:01.200 END TEST test_save_ublk_config 00:37:01.200 ************************************ 00:37:01.200 00:37:01.200 real 0m7.281s 00:37:01.200 user 0m5.058s 00:37:01.200 sys 0m2.834s 00:37:01.200 16:02:22 ublk.test_save_ublk_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:01.200 16:02:22 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:37:01.200 16:02:22 ublk -- ublk/ublk.sh@139 -- # spdk_pid=70841 00:37:01.200 16:02:22 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:37:01.200 16:02:22 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:37:01.200 16:02:22 ublk -- ublk/ublk.sh@141 -- # waitforlisten 70841 00:37:01.200 16:02:22 ublk -- common/autotest_common.sh@833 -- # '[' -z 70841 ']' 00:37:01.200 16:02:22 ublk -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:01.200 16:02:22 ublk -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:01.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:01.200 16:02:22 ublk -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:01.200 16:02:22 ublk -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:01.200 16:02:22 ublk -- common/autotest_common.sh@10 -- # set +x 00:37:01.200 [2024-11-05 16:02:22.559247] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 
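[annotation] This third target is started with "-m 0x3", so the startup lines that follow report two available cores and two reactors instead of one. waitforlisten then blocks until the target answers on the RPC socket; roughly, and assuming the default socket path /var/tmp/spdk.sock shown in this log:

    build/bin/spdk_tgt -m 0x3 -L ublk &
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5    # poll interval is an assumption; the helper retries up to max_retries times
    done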
00:37:01.200 [2024-11-05 16:02:22.559367] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70841 ] 00:37:01.459 [2024-11-05 16:02:22.713432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:01.459 [2024-11-05 16:02:22.792576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:01.459 [2024-11-05 16:02:22.792712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:02.026 16:02:23 ublk -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:37:02.026 16:02:23 ublk -- common/autotest_common.sh@866 -- # return 0 00:37:02.285 16:02:23 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:37:02.285 16:02:23 ublk -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:37:02.285 16:02:23 ublk -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:02.285 16:02:23 ublk -- common/autotest_common.sh@10 -- # set +x 00:37:02.285 ************************************ 00:37:02.285 START TEST test_create_ublk 00:37:02.285 ************************************ 00:37:02.285 16:02:23 ublk.test_create_ublk -- common/autotest_common.sh@1127 -- # test_create_ublk 00:37:02.285 16:02:23 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:37:02.285 16:02:23 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:02.285 16:02:23 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:37:02.285 [2024-11-05 16:02:23.407752] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:37:02.285 [2024-11-05 16:02:23.409289] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:37:02.285 16:02:23 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:02.285 16:02:23 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:37:02.285 16:02:23 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:37:02.285 16:02:23 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:02.285 16:02:23 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:37:02.285 16:02:23 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:02.285 16:02:23 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:37:02.285 16:02:23 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:37:02.285 16:02:23 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:02.285 16:02:23 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:37:02.285 [2024-11-05 16:02:23.559859] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:37:02.285 [2024-11-05 16:02:23.560155] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:37:02.285 [2024-11-05 16:02:23.560169] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:37:02.286 [2024-11-05 16:02:23.560175] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:37:02.286 [2024-11-05 16:02:23.567960] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:37:02.286 [2024-11-05 16:02:23.567978] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:37:02.286 
[2024-11-05 16:02:23.575761] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:37:02.286 [2024-11-05 16:02:23.585795] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:37:02.286 [2024-11-05 16:02:23.611768] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:37:02.286 16:02:23 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:02.286 16:02:23 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:37:02.286 16:02:23 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:37:02.286 16:02:23 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:37:02.286 16:02:23 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:02.286 16:02:23 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:37:02.286 16:02:23 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:02.286 16:02:23 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:37:02.286 { 00:37:02.286 "ublk_device": "/dev/ublkb0", 00:37:02.286 "id": 0, 00:37:02.286 "queue_depth": 512, 00:37:02.286 "num_queues": 4, 00:37:02.286 "bdev_name": "Malloc0" 00:37:02.286 } 00:37:02.286 ]' 00:37:02.286 16:02:23 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:37:02.545 16:02:23 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:37:02.545 16:02:23 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:37:02.545 16:02:23 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:37:02.545 16:02:23 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:37:02.545 16:02:23 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:37:02.545 16:02:23 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:37:02.545 16:02:23 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:37:02.545 16:02:23 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:37:02.545 16:02:23 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:37:02.545 16:02:23 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:37:02.545 16:02:23 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:37:02.545 16:02:23 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:37:02.545 16:02:23 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:37:02.545 16:02:23 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:37:02.545 16:02:23 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:37:02.545 16:02:23 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:37:02.545 16:02:23 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:37:02.545 16:02:23 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:37:02.545 16:02:23 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:37:02.545 16:02:23 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
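[annotation] The fio_template assembled above is a 10-second, time-based 4 KiB write of the 0xcc pattern across the first 128 MiB (134217728 bytes) of /dev/ublkb0 with pattern verification enabled; because the write phase consumes the whole runtime, fio itself warns below that the verify read phase never starts. The same command expressed as an equivalent job file, for reference (a sketch; the section name matches the --name flag above):

    [fio_test]
    filename=/dev/ublkb0
    rw=write
    offset=0
    size=134217728
    direct=1
    time_based=1
    runtime=10
    do_verify=1
    verify=pattern
    verify_pattern=0xcc
    verify_state_save=0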
00:37:02.545 16:02:23 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:37:02.545 fio: verification read phase will never start because write phase uses all of runtime 00:37:02.545 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:37:02.545 fio-3.35 00:37:02.545 Starting 1 process 00:37:14.749 00:37:14.749 fio_test: (groupid=0, jobs=1): err= 0: pid=70882: Tue Nov 5 16:02:34 2024 00:37:14.749 write: IOPS=19.8k, BW=77.2MiB/s (81.0MB/s)(773MiB/10001msec); 0 zone resets 00:37:14.749 clat (usec): min=32, max=4028, avg=49.76, stdev=80.73 00:37:14.750 lat (usec): min=32, max=4028, avg=50.22, stdev=80.75 00:37:14.750 clat percentiles (usec): 00:37:14.750 | 1.00th=[ 38], 5.00th=[ 40], 10.00th=[ 42], 20.00th=[ 44], 00:37:14.750 | 30.00th=[ 44], 40.00th=[ 45], 50.00th=[ 46], 60.00th=[ 47], 00:37:14.750 | 70.00th=[ 48], 80.00th=[ 50], 90.00th=[ 55], 95.00th=[ 60], 00:37:14.750 | 99.00th=[ 70], 99.50th=[ 77], 99.90th=[ 1287], 99.95th=[ 2409], 00:37:14.750 | 99.99th=[ 3490] 00:37:14.750 bw ( KiB/s): min=73824, max=86344, per=99.96%, avg=79066.95, stdev=2851.46, samples=19 00:37:14.750 iops : min=18456, max=21586, avg=19766.74, stdev=712.86, samples=19 00:37:14.750 lat (usec) : 50=80.81%, 100=18.88%, 250=0.14%, 500=0.04%, 750=0.01% 00:37:14.750 lat (usec) : 1000=0.01% 00:37:14.750 lat (msec) : 2=0.04%, 4=0.07%, 10=0.01% 00:37:14.750 cpu : usr=3.46%, sys=16.55%, ctx=197758, majf=0, minf=795 00:37:14.750 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:14.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.750 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:14.750 issued rwts: total=0,197761,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:14.750 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:14.750 00:37:14.750 Run status group 0 (all jobs): 00:37:14.750 WRITE: bw=77.2MiB/s (81.0MB/s), 77.2MiB/s-77.2MiB/s (81.0MB/s-81.0MB/s), io=773MiB (810MB), run=10001-10001msec 00:37:14.750 00:37:14.750 Disk stats (read/write): 00:37:14.750 ublkb0: ios=0/195668, merge=0/0, ticks=0/8023, in_queue=8024, util=99.10% 00:37:14.750 16:02:34 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:37:14.750 16:02:34 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.750 16:02:34 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:37:14.750 [2024-11-05 16:02:34.032600] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:37:14.750 [2024-11-05 16:02:34.071195] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:37:14.750 [2024-11-05 16:02:34.072069] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:37:14.750 [2024-11-05 16:02:34.079763] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:37:14.750 [2024-11-05 16:02:34.079996] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:37:14.750 [2024-11-05 16:02:34.080011] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:37:14.750 16:02:34 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.750 16:02:34 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 
0 00:37:14.750 16:02:34 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:37:14.750 16:02:34 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:37:14.750 16:02:34 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:37:14.750 16:02:34 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:14.750 16:02:34 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:37:14.750 16:02:34 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:14.750 16:02:34 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:37:14.750 16:02:34 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.750 16:02:34 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:37:14.750 [2024-11-05 16:02:34.095806] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:37:14.750 request: 00:37:14.750 { 00:37:14.750 "ublk_id": 0, 00:37:14.750 "method": "ublk_stop_disk", 00:37:14.750 "req_id": 1 00:37:14.750 } 00:37:14.750 Got JSON-RPC error response 00:37:14.750 response: 00:37:14.750 { 00:37:14.750 "code": -19, 00:37:14.750 "message": "No such device" 00:37:14.750 } 00:37:14.750 16:02:34 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:37:14.750 16:02:34 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:37:14.750 16:02:34 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:14.750 16:02:34 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:14.750 16:02:34 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:14.750 16:02:34 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:37:14.750 16:02:34 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.750 16:02:34 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:37:14.750 [2024-11-05 16:02:34.119813] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:37:14.750 [2024-11-05 16:02:34.123354] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:37:14.750 [2024-11-05 16:02:34.123386] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:37:14.750 16:02:34 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.750 16:02:34 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:37:14.750 16:02:34 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.750 16:02:34 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:37:14.750 16:02:34 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.750 16:02:34 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:37:14.750 16:02:34 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:37:14.750 16:02:34 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.750 16:02:34 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:37:14.750 16:02:34 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.750 16:02:34 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:37:14.750 16:02:34 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:37:14.750 16:02:34 ublk.test_create_ublk -- 
lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:37:14.750 16:02:34 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:37:14.750 16:02:34 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.750 16:02:34 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:37:14.750 16:02:34 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.750 16:02:34 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:37:14.750 16:02:34 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:37:14.750 ************************************ 00:37:14.750 END TEST test_create_ublk 00:37:14.750 ************************************ 00:37:14.750 16:02:34 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:37:14.750 00:37:14.750 real 0m11.181s 00:37:14.750 user 0m0.643s 00:37:14.750 sys 0m1.736s 00:37:14.750 16:02:34 ublk.test_create_ublk -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:14.750 16:02:34 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:37:14.750 16:02:34 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:37:14.750 16:02:34 ublk -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:37:14.750 16:02:34 ublk -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:14.750 16:02:34 ublk -- common/autotest_common.sh@10 -- # set +x 00:37:14.750 ************************************ 00:37:14.750 START TEST test_create_multi_ublk 00:37:14.750 ************************************ 00:37:14.750 16:02:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@1127 -- # test_create_multi_ublk 00:37:14.750 16:02:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:37:14.750 16:02:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.750 16:02:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:37:14.750 [2024-11-05 16:02:34.631743] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:37:14.750 [2024-11-05 16:02:34.633337] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:37:14.750 16:02:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.750 16:02:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:37:14.750 16:02:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:37:14.750 16:02:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:37:14.750 16:02:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:37:14.750 16:02:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.750 16:02:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:37:14.750 16:02:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.750 16:02:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:37:14.750 16:02:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:37:14.750 16:02:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.750 16:02:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:37:14.750 [2024-11-05 16:02:34.844030] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 
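[annotation] Every device bring-up in this test follows the same three control commands visible in the DEBUG lines, here starting for ublk0 and continuing below: UBLK_CMD_ADD_DEV, then UBLK_CMD_SET_PARAMS, then UBLK_CMD_START_DEV, after which the /dev/ublkbN node appears. The loop body reduces to two RPCs per index; a sketch matching the parameters used here:

    for i in 0 1 2 3; do
        scripts/rpc.py bdev_malloc_create -b Malloc$i 128 4096
        scripts/rpc.py ublk_start_disk Malloc$i $i -q 4 -d 512
    done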
00:37:14.750 [2024-11-05 16:02:34.844329] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:37:14.750 [2024-11-05 16:02:34.844341] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:37:14.750 [2024-11-05 16:02:34.844349] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:37:14.750 [2024-11-05 16:02:34.855957] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:37:14.750 [2024-11-05 16:02:34.855979] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:37:14.750 [2024-11-05 16:02:34.867751] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:37:14.750 [2024-11-05 16:02:34.868260] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:37:14.750 [2024-11-05 16:02:34.907760] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:37:14.751 16:02:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.751 16:02:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:37:14.751 16:02:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:37:14.751 16:02:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:37:14.751 16:02:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.751 16:02:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:37:14.751 16:02:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.751 16:02:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:37:14.751 16:02:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:37:14.751 16:02:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.751 16:02:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:37:14.751 [2024-11-05 16:02:35.123850] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:37:14.751 [2024-11-05 16:02:35.124143] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:37:14.751 [2024-11-05 16:02:35.124157] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:37:14.751 [2024-11-05 16:02:35.124162] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:37:14.751 [2024-11-05 16:02:35.131777] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:37:14.751 [2024-11-05 16:02:35.131794] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:37:14.751 [2024-11-05 16:02:35.139760] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:37:14.751 [2024-11-05 16:02:35.140258] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:37:14.751 [2024-11-05 16:02:35.148782] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:37:14.751 16:02:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.751 16:02:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:37:14.751 16:02:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:37:14.751 16:02:35 
ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:37:14.751 16:02:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.751 16:02:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:37:14.751 16:02:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.751 16:02:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:37:14.751 16:02:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:37:14.751 16:02:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.751 16:02:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:37:14.751 [2024-11-05 16:02:35.307835] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:37:14.751 [2024-11-05 16:02:35.308131] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:37:14.751 [2024-11-05 16:02:35.308143] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:37:14.751 [2024-11-05 16:02:35.308149] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:37:14.751 [2024-11-05 16:02:35.315771] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:37:14.751 [2024-11-05 16:02:35.315791] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:37:14.751 [2024-11-05 16:02:35.323757] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:37:14.751 [2024-11-05 16:02:35.324254] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:37:14.751 [2024-11-05 16:02:35.332755] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:37:14.751 16:02:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.751 16:02:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:37:14.751 16:02:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:37:14.751 16:02:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:37:14.751 16:02:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.751 16:02:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:37:14.751 16:02:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.751 16:02:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:37:14.751 16:02:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:37:14.751 16:02:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.751 16:02:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:37:14.751 [2024-11-05 16:02:35.491857] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:37:14.751 [2024-11-05 16:02:35.492155] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:37:14.751 [2024-11-05 16:02:35.492169] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:37:14.751 [2024-11-05 16:02:35.492174] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:37:14.751 [2024-11-05 
16:02:35.499767] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:37:14.751 [2024-11-05 16:02:35.499784] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:37:14.751 [2024-11-05 16:02:35.507759] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:37:14.751 [2024-11-05 16:02:35.508251] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:37:14.751 [2024-11-05 16:02:35.516777] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:37:14.751 16:02:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.751 16:02:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:37:14.751 16:02:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:37:14.751 16:02:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.751 16:02:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:37:14.751 16:02:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.751 16:02:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:37:14.751 { 00:37:14.751 "ublk_device": "/dev/ublkb0", 00:37:14.751 "id": 0, 00:37:14.751 "queue_depth": 512, 00:37:14.751 "num_queues": 4, 00:37:14.751 "bdev_name": "Malloc0" 00:37:14.751 }, 00:37:14.751 { 00:37:14.751 "ublk_device": "/dev/ublkb1", 00:37:14.751 "id": 1, 00:37:14.751 "queue_depth": 512, 00:37:14.751 "num_queues": 4, 00:37:14.751 "bdev_name": "Malloc1" 00:37:14.751 }, 00:37:14.751 { 00:37:14.751 "ublk_device": "/dev/ublkb2", 00:37:14.751 "id": 2, 00:37:14.751 "queue_depth": 512, 00:37:14.751 "num_queues": 4, 00:37:14.751 "bdev_name": "Malloc2" 00:37:14.751 }, 00:37:14.751 { 00:37:14.751 "ublk_device": "/dev/ublkb3", 00:37:14.751 "id": 3, 00:37:14.751 "queue_depth": 512, 00:37:14.751 "num_queues": 4, 00:37:14.751 "bdev_name": "Malloc3" 00:37:14.751 } 00:37:14.751 ]' 00:37:14.751 16:02:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:37:14.751 16:02:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:37:14.751 16:02:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:37:14.751 16:02:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:37:14.751 16:02:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:37:14.751 16:02:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:37:14.751 16:02:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:37:14.751 16:02:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:37:14.751 16:02:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:37:14.751 16:02:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:37:14.751 16:02:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:37:14.751 16:02:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:37:14.751 16:02:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:37:14.751 16:02:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:37:14.751 16:02:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 
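[annotation] Verification walks the ublk_get_disks output and checks each entry's ublk_device, id, queue_depth, num_queues, and bdev_name with jq, as in this condensed form of the checks running above and below:

    disks=$(scripts/rpc.py ublk_get_disks)
    [[ $(jq -r '.[1].ublk_device' <<< "$disks") == /dev/ublkb1 ]]
    [[ $(jq -r '.[1].bdev_name'   <<< "$disks") == Malloc1 ]]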
00:37:14.751 16:02:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:37:14.751 16:02:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:37:14.751 16:02:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:37:14.751 16:02:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:37:14.751 16:02:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:37:14.751 16:02:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:37:14.751 16:02:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:37:14.751 16:02:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:37:14.751 16:02:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:37:14.751 16:02:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:37:14.751 16:02:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:37:14.751 16:02:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:37:14.751 16:02:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:37:14.751 16:02:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:37:14.751 16:02:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:37:14.751 16:02:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:37:14.751 16:02:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:37:14.751 16:02:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:37:14.751 16:02:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:37:14.751 16:02:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:37:14.751 16:02:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:37:14.751 16:02:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:37:14.751 16:02:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:37:14.751 16:02:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:37:14.751 16:02:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:37:15.010 16:02:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:37:15.010 16:02:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:37:15.010 16:02:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:37:15.010 16:02:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:37:15.010 16:02:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:37:15.010 16:02:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:37:15.010 16:02:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:37:15.010 16:02:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:37:15.010 16:02:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:37:15.010 16:02:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:15.010 16:02:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:37:15.010 [2024-11-05 16:02:36.195838] ublk.c: 469:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:37:15.010 [2024-11-05 16:02:36.227765] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:37:15.010 [2024-11-05 16:02:36.228495] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:37:15.010 [2024-11-05 16:02:36.235833] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:37:15.010 [2024-11-05 16:02:36.236062] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:37:15.010 [2024-11-05 16:02:36.236075] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:37:15.010 16:02:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.010 16:02:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:37:15.010 16:02:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:37:15.010 16:02:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:15.011 16:02:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:37:15.011 [2024-11-05 16:02:36.251809] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:37:15.011 [2024-11-05 16:02:36.287786] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:37:15.011 [2024-11-05 16:02:36.288426] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:37:15.011 [2024-11-05 16:02:36.295766] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:37:15.011 [2024-11-05 16:02:36.295991] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:37:15.011 [2024-11-05 16:02:36.296003] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:37:15.011 16:02:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.011 16:02:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:37:15.011 16:02:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:37:15.011 16:02:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:15.011 16:02:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:37:15.011 [2024-11-05 16:02:36.311820] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:37:15.011 [2024-11-05 16:02:36.341183] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:37:15.011 [2024-11-05 16:02:36.342095] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:37:15.011 [2024-11-05 16:02:36.351764] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:37:15.011 [2024-11-05 16:02:36.351978] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:37:15.011 [2024-11-05 16:02:36.351991] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:37:15.011 16:02:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.011 16:02:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:37:15.011 16:02:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:37:15.011 16:02:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:15.011 16:02:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 
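[annotation] Teardown stops each disk in turn (the STOP_DEV then DEL_DEV control-command pairs above repeat for ublk3 below) and finally destroys the target; the generous "-t 120" RPC timeout on the destroy call matches the rpc.py invocation later in this log:

    for i in 0 1 2 3; do
        scripts/rpc.py ublk_stop_disk $i
    done
    scripts/rpc.py -t 120 ublk_destroy_target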
00:37:15.011 [2024-11-05 16:02:36.367818] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:37:15.269 [2024-11-05 16:02:36.399796] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:37:15.269 [2024-11-05 16:02:36.400347] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:37:15.269 [2024-11-05 16:02:36.407757] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:37:15.269 [2024-11-05 16:02:36.407974] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:37:15.269 [2024-11-05 16:02:36.407986] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:37:15.269 16:02:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.269 16:02:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:37:15.269 [2024-11-05 16:02:36.607814] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:37:15.269 [2024-11-05 16:02:36.611420] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:37:15.269 [2024-11-05 16:02:36.611449] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:37:15.527 16:02:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:37:15.527 16:02:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:37:15.527 16:02:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:37:15.527 16:02:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:15.527 16:02:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:37:15.786 16:02:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.786 16:02:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:37:15.786 16:02:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:37:15.786 16:02:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:15.786 16:02:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:37:16.044 16:02:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.044 16:02:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:37:16.044 16:02:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:37:16.044 16:02:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.044 16:02:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:37:16.302 16:02:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.302 16:02:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:37:16.302 16:02:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:37:16.302 16:02:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.302 16:02:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:37:16.560 16:02:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.560 16:02:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:37:16.560 16:02:37 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:37:16.560 16:02:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.560 16:02:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:37:16.560 16:02:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.560 16:02:37 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:37:16.560 16:02:37 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:37:16.560 16:02:37 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:37:16.560 16:02:37 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:37:16.560 16:02:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.560 16:02:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:37:16.560 16:02:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.560 16:02:37 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:37:16.560 16:02:37 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:37:16.560 ************************************ 00:37:16.560 END TEST test_create_multi_ublk 00:37:16.560 ************************************ 00:37:16.560 16:02:37 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:37:16.560 00:37:16.560 real 0m3.187s 00:37:16.560 user 0m0.847s 00:37:16.560 sys 0m0.126s 00:37:16.561 16:02:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:16.561 16:02:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:37:16.561 16:02:37 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:37:16.561 16:02:37 ublk -- ublk/ublk.sh@147 -- # cleanup 00:37:16.561 16:02:37 ublk -- ublk/ublk.sh@130 -- # killprocess 70841 00:37:16.561 16:02:37 ublk -- common/autotest_common.sh@952 -- # '[' -z 70841 ']' 00:37:16.561 16:02:37 ublk -- common/autotest_common.sh@956 -- # kill -0 70841 00:37:16.561 16:02:37 ublk -- common/autotest_common.sh@957 -- # uname 00:37:16.561 16:02:37 ublk -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:37:16.561 16:02:37 ublk -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70841 00:37:16.561 killing process with pid 70841 00:37:16.561 16:02:37 ublk -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:37:16.561 16:02:37 ublk -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:37:16.561 16:02:37 ublk -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70841' 00:37:16.561 16:02:37 ublk -- common/autotest_common.sh@971 -- # kill 70841 00:37:16.561 16:02:37 ublk -- common/autotest_common.sh@976 -- # wait 70841 00:37:17.126 [2024-11-05 16:02:38.377032] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:37:17.126 [2024-11-05 16:02:38.377081] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:37:17.692 00:37:17.692 real 0m23.995s 00:37:17.692 user 0m34.543s 00:37:17.692 sys 0m9.695s 00:37:17.693 16:02:39 ublk -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:17.693 ************************************ 00:37:17.693 END TEST ublk 00:37:17.693 16:02:39 ublk -- common/autotest_common.sh@10 -- # set +x 00:37:17.693 ************************************ 00:37:17.693 16:02:39 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:37:17.693 16:02:39 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 
']' 00:37:17.693 16:02:39 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:17.693 16:02:39 -- common/autotest_common.sh@10 -- # set +x 00:37:17.693 ************************************ 00:37:17.693 START TEST ublk_recovery 00:37:17.693 ************************************ 00:37:17.954 16:02:39 ublk_recovery -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:37:17.954 * Looking for test storage... 00:37:17.954 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:37:17.954 16:02:39 ublk_recovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:17.954 16:02:39 ublk_recovery -- common/autotest_common.sh@1691 -- # lcov --version 00:37:17.954 16:02:39 ublk_recovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:17.954 16:02:39 ublk_recovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:17.954 16:02:39 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:17.954 16:02:39 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:17.954 16:02:39 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:17.954 16:02:39 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:37:17.954 16:02:39 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:37:17.954 16:02:39 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:37:17.954 16:02:39 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:37:17.954 16:02:39 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:37:17.954 16:02:39 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:37:17.954 16:02:39 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:37:17.954 16:02:39 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:17.954 16:02:39 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:37:17.954 16:02:39 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:37:17.954 16:02:39 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:17.954 16:02:39 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:17.954 16:02:39 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:37:17.954 16:02:39 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:37:17.954 16:02:39 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:17.954 16:02:39 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:37:17.954 16:02:39 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:37:17.954 16:02:39 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:37:17.954 16:02:39 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:37:17.954 16:02:39 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:17.954 16:02:39 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:37:17.954 16:02:39 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:37:17.954 16:02:39 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:17.954 16:02:39 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:17.954 16:02:39 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:37:17.954 16:02:39 ublk_recovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:17.954 16:02:39 ublk_recovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:17.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:17.954 --rc genhtml_branch_coverage=1 00:37:17.954 --rc genhtml_function_coverage=1 00:37:17.954 --rc genhtml_legend=1 00:37:17.954 --rc geninfo_all_blocks=1 00:37:17.954 --rc geninfo_unexecuted_blocks=1 00:37:17.954 00:37:17.954 ' 00:37:17.954 16:02:39 ublk_recovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:17.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:17.954 --rc genhtml_branch_coverage=1 00:37:17.954 --rc genhtml_function_coverage=1 00:37:17.954 --rc genhtml_legend=1 00:37:17.954 --rc geninfo_all_blocks=1 00:37:17.954 --rc geninfo_unexecuted_blocks=1 00:37:17.954 00:37:17.954 ' 00:37:17.954 16:02:39 ublk_recovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:17.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:17.954 --rc genhtml_branch_coverage=1 00:37:17.954 --rc genhtml_function_coverage=1 00:37:17.954 --rc genhtml_legend=1 00:37:17.954 --rc geninfo_all_blocks=1 00:37:17.954 --rc geninfo_unexecuted_blocks=1 00:37:17.954 00:37:17.954 ' 00:37:17.954 16:02:39 ublk_recovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:17.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:17.954 --rc genhtml_branch_coverage=1 00:37:17.954 --rc genhtml_function_coverage=1 00:37:17.954 --rc genhtml_legend=1 00:37:17.954 --rc geninfo_all_blocks=1 00:37:17.954 --rc geninfo_unexecuted_blocks=1 00:37:17.954 00:37:17.954 ' 00:37:17.954 16:02:39 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:37:17.954 16:02:39 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:37:17.954 16:02:39 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:37:17.954 16:02:39 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:37:17.954 16:02:39 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:37:17.954 16:02:39 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:37:17.954 16:02:39 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:37:17.954 16:02:39 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:37:17.954 16:02:39 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:37:17.954 16:02:39 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:37:17.954 16:02:39 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=71230 00:37:17.954 16:02:39 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:37:17.954 16:02:39 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 71230 00:37:17.954 16:02:39 ublk_recovery -- common/autotest_common.sh@833 -- # '[' -z 71230 ']' 00:37:17.954 16:02:39 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:37:17.954 16:02:39 ublk_recovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:17.954 16:02:39 ublk_recovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:17.954 16:02:39 ublk_recovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:17.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:17.954 16:02:39 ublk_recovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:17.954 16:02:39 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:37:17.954 [2024-11-05 16:02:39.276483] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 00:37:17.954 [2024-11-05 16:02:39.276602] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71230 ] 00:37:18.213 [2024-11-05 16:02:39.432423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:18.213 [2024-11-05 16:02:39.509722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:18.213 [2024-11-05 16:02:39.509792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:18.787 16:02:40 ublk_recovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:37:18.787 16:02:40 ublk_recovery -- common/autotest_common.sh@866 -- # return 0 00:37:18.787 16:02:40 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:37:18.787 16:02:40 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.787 16:02:40 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:37:18.787 [2024-11-05 16:02:40.067753] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:37:18.787 [2024-11-05 16:02:40.069210] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:37:18.787 16:02:40 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.787 16:02:40 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:37:18.787 16:02:40 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.787 16:02:40 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:37:18.787 malloc0 00:37:18.787 16:02:40 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.787 16:02:40 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:37:18.787 16:02:40 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.787 16:02:40 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:37:19.045 [2024-11-05 16:02:40.151857] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:37:19.045 [2024-11-05 16:02:40.151934] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:37:19.045 [2024-11-05 16:02:40.151942] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:37:19.045 [2024-11-05 16:02:40.151949] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:37:19.045 [2024-11-05 16:02:40.159764] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:37:19.045 [2024-11-05 16:02:40.159780] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:37:19.045 [2024-11-05 16:02:40.167761] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:37:19.045 [2024-11-05 16:02:40.167873] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:37:19.045 [2024-11-05 16:02:40.182757] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:37:19.045 1 00:37:19.045 16:02:40 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:19.045 16:02:40 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:37:20.001 16:02:41 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=71265 00:37:20.001 16:02:41 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:37:20.001 16:02:41 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:37:20.001 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:20.001 fio-3.35 00:37:20.001 Starting 1 process 00:37:25.266 16:02:46 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 71230 00:37:25.266 16:02:46 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:37:30.570 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 71230 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:37:30.570 16:02:51 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=71376 00:37:30.570 16:02:51 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:37:30.570 16:02:51 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 71376 00:37:30.570 16:02:51 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:37:30.570 16:02:51 ublk_recovery -- common/autotest_common.sh@833 -- # '[' -z 71376 ']' 00:37:30.570 16:02:51 ublk_recovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:30.570 16:02:51 ublk_recovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:30.570 16:02:51 ublk_recovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:30.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:30.570 16:02:51 ublk_recovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:30.570 16:02:51 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:37:30.570 [2024-11-05 16:02:51.279685] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 
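The shape of the recovery scenario at this point, condensed from the trace (rpc_cmd wraps scripts/rpc.py against the running target; $spdk_pid and $fio_pid are placeholders for the pids captured above, 71230 and 71265 in this run):

    # Target A: create a ublk disk over a malloc bdev and start I/O.
    rpc_cmd ublk_create_target
    rpc_cmd bdev_malloc_create -b malloc0 64 4096
    rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128
    taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 \
        --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw \
        --direct=1 --time_based --runtime=60 &
    fio_pid=$!

    # Kill the target mid-workload, leaving /dev/ublkb1 orphaned.
    kill -9 "$spdk_pid"

    # Target B: recover the still-open kernel device instead of recreating it
    # (UBLK_CMD_START_USER_RECOVERY / UBLK_CMD_END_USER_RECOVERY below).
    rpc_cmd ublk_create_target
    rpc_cmd bdev_malloc_create -b malloc0 64 4096
    rpc_cmd ublk_recover_disk malloc0 1
    wait "$fio_pid"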
00:37:30.570 [2024-11-05 16:02:51.279817] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71376 ] 00:37:30.570 [2024-11-05 16:02:51.438567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:30.570 [2024-11-05 16:02:51.541534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:30.570 [2024-11-05 16:02:51.541623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:30.831 16:02:52 ublk_recovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:37:30.831 16:02:52 ublk_recovery -- common/autotest_common.sh@866 -- # return 0 00:37:30.831 16:02:52 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:37:30.831 16:02:52 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:30.831 16:02:52 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:37:30.831 [2024-11-05 16:02:52.129762] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:37:30.831 [2024-11-05 16:02:52.131618] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:37:30.831 16:02:52 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:30.831 16:02:52 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:37:30.831 16:02:52 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:30.831 16:02:52 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:37:31.092 malloc0 00:37:31.092 16:02:52 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:31.092 16:02:52 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:37:31.092 16:02:52 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:31.092 16:02:52 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:37:31.092 [2024-11-05 16:02:52.230894] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:37:31.092 [2024-11-05 16:02:52.230930] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:37:31.092 [2024-11-05 16:02:52.230939] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:37:31.092 [2024-11-05 16:02:52.238777] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:37:31.092 [2024-11-05 16:02:52.238803] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:37:31.092 [2024-11-05 16:02:52.238812] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:37:31.092 [2024-11-05 16:02:52.238886] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:37:31.092 1 00:37:31.092 16:02:52 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:31.092 16:02:52 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 71265 00:37:31.092 [2024-11-05 16:02:52.246772] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:37:31.092 [2024-11-05 16:02:52.253257] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:37:31.092 [2024-11-05 16:02:52.260950] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:37:31.092 [2024-11-05 
16:02:52.260972] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:38:27.366 00:38:27.366 fio_test: (groupid=0, jobs=1): err= 0: pid=71268: Tue Nov 5 16:03:41 2024 00:38:27.366 read: IOPS=28.4k, BW=111MiB/s (116MB/s)(6646MiB/60002msec) 00:38:27.366 slat (nsec): min=943, max=1064.8k, avg=4787.94, stdev=1694.55 00:38:27.366 clat (usec): min=740, max=6074.6k, avg=2237.66, stdev=38943.05 00:38:27.366 lat (usec): min=744, max=6074.6k, avg=2242.45, stdev=38943.05 00:38:27.366 clat percentiles (usec): 00:38:27.366 | 1.00th=[ 1647], 5.00th=[ 1762], 10.00th=[ 1795], 20.00th=[ 1827], 00:38:27.366 | 30.00th=[ 1844], 40.00th=[ 1860], 50.00th=[ 1860], 60.00th=[ 1876], 00:38:27.366 | 70.00th=[ 1893], 80.00th=[ 1926], 90.00th=[ 1975], 95.00th=[ 2835], 00:38:27.366 | 99.00th=[ 4817], 99.50th=[ 5604], 99.90th=[ 6718], 99.95th=[ 7963], 00:38:27.366 | 99.99th=[12780] 00:38:27.366 bw ( KiB/s): min=14016, max=130864, per=100.00%, avg=124920.89, stdev=15809.30, samples=108 00:38:27.366 iops : min= 3504, max=32716, avg=31230.22, stdev=3952.32, samples=108 00:38:27.366 write: IOPS=28.3k, BW=111MiB/s (116MB/s)(6640MiB/60002msec); 0 zone resets 00:38:27.366 slat (nsec): min=953, max=276173, avg=4815.81, stdev=1539.07 00:38:27.366 clat (usec): min=515, max=6074.6k, avg=2267.74, stdev=35464.45 00:38:27.366 lat (usec): min=520, max=6074.6k, avg=2272.55, stdev=35464.45 00:38:27.366 clat percentiles (usec): 00:38:27.366 | 1.00th=[ 1680], 5.00th=[ 1844], 10.00th=[ 1876], 20.00th=[ 1909], 00:38:27.366 | 30.00th=[ 1926], 40.00th=[ 1942], 50.00th=[ 1958], 60.00th=[ 1975], 00:38:27.366 | 70.00th=[ 1991], 80.00th=[ 2008], 90.00th=[ 2057], 95.00th=[ 2737], 00:38:27.366 | 99.00th=[ 4817], 99.50th=[ 5669], 99.90th=[ 6652], 99.95th=[ 8029], 00:38:27.366 | 99.99th=[12780] 00:38:27.366 bw ( KiB/s): min=14288, max=130152, per=100.00%, avg=124824.37, stdev=15844.97, samples=108 00:38:27.366 iops : min= 3572, max=32538, avg=31206.09, stdev=3961.24, samples=108 00:38:27.366 lat (usec) : 750=0.01%, 1000=0.01% 00:38:27.366 lat (msec) : 2=84.53%, 4=13.03%, 10=2.42%, 20=0.02%, >=2000=0.01% 00:38:27.366 cpu : usr=6.22%, sys=27.99%, ctx=115687, majf=0, minf=13 00:38:27.366 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:38:27.366 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:27.366 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:38:27.366 issued rwts: total=1701388,1699956,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:27.366 latency : target=0, window=0, percentile=100.00%, depth=128 00:38:27.366 00:38:27.366 Run status group 0 (all jobs): 00:38:27.366 READ: bw=111MiB/s (116MB/s), 111MiB/s-111MiB/s (116MB/s-116MB/s), io=6646MiB (6969MB), run=60002-60002msec 00:38:27.366 WRITE: bw=111MiB/s (116MB/s), 111MiB/s-111MiB/s (116MB/s-116MB/s), io=6640MiB (6963MB), run=60002-60002msec 00:38:27.366 00:38:27.366 Disk stats (read/write): 00:38:27.366 ublkb1: ios=1697952/1696624, merge=0/0, ticks=3713021/3627877, in_queue=7340898, util=99.89% 00:38:27.366 16:03:41 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:38:27.366 16:03:41 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:27.366 16:03:41 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:38:27.366 [2024-11-05 16:03:41.452002] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:38:27.366 [2024-11-05 16:03:41.487848] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 
completed 00:38:27.366 [2024-11-05 16:03:41.487967] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:38:27.366 [2024-11-05 16:03:41.495765] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:38:27.366 [2024-11-05 16:03:41.495843] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:38:27.366 [2024-11-05 16:03:41.495851] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:38:27.366 16:03:41 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:27.366 16:03:41 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:38:27.366 16:03:41 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:27.366 16:03:41 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:38:27.366 [2024-11-05 16:03:41.509830] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:38:27.366 [2024-11-05 16:03:41.513401] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:38:27.366 [2024-11-05 16:03:41.513432] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:38:27.366 16:03:41 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:27.366 16:03:41 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:38:27.366 16:03:41 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:38:27.367 16:03:41 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 71376 00:38:27.367 16:03:41 ublk_recovery -- common/autotest_common.sh@952 -- # '[' -z 71376 ']' 00:38:27.367 16:03:41 ublk_recovery -- common/autotest_common.sh@956 -- # kill -0 71376 00:38:27.367 16:03:41 ublk_recovery -- common/autotest_common.sh@957 -- # uname 00:38:27.367 16:03:41 ublk_recovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:27.367 16:03:41 ublk_recovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71376 00:38:27.367 killing process with pid 71376 00:38:27.367 16:03:41 ublk_recovery -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:38:27.367 16:03:41 ublk_recovery -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:38:27.367 16:03:41 ublk_recovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71376' 00:38:27.367 16:03:41 ublk_recovery -- common/autotest_common.sh@971 -- # kill 71376 00:38:27.367 16:03:41 ublk_recovery -- common/autotest_common.sh@976 -- # wait 71376 00:38:27.367 [2024-11-05 16:03:42.578463] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:38:27.367 [2024-11-05 16:03:42.578660] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:38:27.367 ************************************ 00:38:27.367 END TEST ublk_recovery 00:38:27.367 ************************************ 00:38:27.367 00:38:27.367 real 1m4.219s 00:38:27.367 user 1m43.446s 00:38:27.367 sys 0m34.794s 00:38:27.367 16:03:43 ublk_recovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:38:27.367 16:03:43 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:38:27.367 16:03:43 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:38:27.367 16:03:43 -- spdk/autotest.sh@256 -- # timing_exit lib 00:38:27.367 16:03:43 -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:27.367 16:03:43 -- common/autotest_common.sh@10 -- # set +x 00:38:27.367 16:03:43 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:38:27.367 16:03:43 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:38:27.367 16:03:43 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:38:27.367 16:03:43 -- spdk/autotest.sh@307 -- # '[' 0 -eq 
1 ']' 00:38:27.367 16:03:43 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:38:27.367 16:03:43 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:38:27.367 16:03:43 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:38:27.367 16:03:43 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:38:27.367 16:03:43 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:38:27.367 16:03:43 -- spdk/autotest.sh@338 -- # '[' 1 -eq 1 ']' 00:38:27.367 16:03:43 -- spdk/autotest.sh@339 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:38:27.367 16:03:43 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:38:27.367 16:03:43 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:38:27.367 16:03:43 -- common/autotest_common.sh@10 -- # set +x 00:38:27.367 ************************************ 00:38:27.367 START TEST ftl 00:38:27.367 ************************************ 00:38:27.367 16:03:43 ftl -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:38:27.367 * Looking for test storage... 00:38:27.367 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:38:27.367 16:03:43 ftl -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:27.367 16:03:43 ftl -- common/autotest_common.sh@1691 -- # lcov --version 00:38:27.367 16:03:43 ftl -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:27.367 16:03:43 ftl -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:27.367 16:03:43 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:27.367 16:03:43 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:27.367 16:03:43 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:27.367 16:03:43 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:38:27.367 16:03:43 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:38:27.367 16:03:43 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:38:27.367 16:03:43 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:38:27.367 16:03:43 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:38:27.367 16:03:43 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:38:27.367 16:03:43 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:38:27.367 16:03:43 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:27.367 16:03:43 ftl -- scripts/common.sh@344 -- # case "$op" in 00:38:27.367 16:03:43 ftl -- scripts/common.sh@345 -- # : 1 00:38:27.367 16:03:43 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:27.367 16:03:43 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:27.367 16:03:43 ftl -- scripts/common.sh@365 -- # decimal 1 00:38:27.367 16:03:43 ftl -- scripts/common.sh@353 -- # local d=1 00:38:27.367 16:03:43 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:27.367 16:03:43 ftl -- scripts/common.sh@355 -- # echo 1 00:38:27.367 16:03:43 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:38:27.367 16:03:43 ftl -- scripts/common.sh@366 -- # decimal 2 00:38:27.367 16:03:43 ftl -- scripts/common.sh@353 -- # local d=2 00:38:27.367 16:03:43 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:27.367 16:03:43 ftl -- scripts/common.sh@355 -- # echo 2 00:38:27.367 16:03:43 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:38:27.367 16:03:43 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:27.367 16:03:43 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:27.367 16:03:43 ftl -- scripts/common.sh@368 -- # return 0 00:38:27.367 16:03:43 ftl -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:27.367 16:03:43 ftl -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:27.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:27.367 --rc genhtml_branch_coverage=1 00:38:27.367 --rc genhtml_function_coverage=1 00:38:27.367 --rc genhtml_legend=1 00:38:27.367 --rc geninfo_all_blocks=1 00:38:27.367 --rc geninfo_unexecuted_blocks=1 00:38:27.367 00:38:27.367 ' 00:38:27.367 16:03:43 ftl -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:38:27.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:27.367 --rc genhtml_branch_coverage=1 00:38:27.367 --rc genhtml_function_coverage=1 00:38:27.367 --rc genhtml_legend=1 00:38:27.367 --rc geninfo_all_blocks=1 00:38:27.367 --rc geninfo_unexecuted_blocks=1 00:38:27.367 00:38:27.367 ' 00:38:27.367 16:03:43 ftl -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:38:27.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:27.367 --rc genhtml_branch_coverage=1 00:38:27.367 --rc genhtml_function_coverage=1 00:38:27.367 --rc genhtml_legend=1 00:38:27.367 --rc geninfo_all_blocks=1 00:38:27.367 --rc geninfo_unexecuted_blocks=1 00:38:27.367 00:38:27.367 ' 00:38:27.367 16:03:43 ftl -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:27.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:27.367 --rc genhtml_branch_coverage=1 00:38:27.367 --rc genhtml_function_coverage=1 00:38:27.367 --rc genhtml_legend=1 00:38:27.367 --rc geninfo_all_blocks=1 00:38:27.367 --rc geninfo_unexecuted_blocks=1 00:38:27.367 00:38:27.367 ' 00:38:27.367 16:03:43 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:38:27.367 16:03:43 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:38:27.367 16:03:43 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:38:27.367 16:03:43 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:38:27.367 16:03:43 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
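The lt/cmp_versions dance that recurs in each "Looking for test storage" preamble reduces to a component-wise compare gating the lcov flags; a simplified sketch, not the exact scripts/common.sh source (which also validates each component through its decimal helper):

    #!/usr/bin/env bash
    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local -a ver1 ver2
        local v d1 d2 op=$2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        # Walk the longer of the two version arrays, padding with 0.
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            d1=${ver1[v]:-0} d2=${ver2[v]:-0}
            (( d1 > d2 )) && { [[ $op == '>' ]]; return; }
            (( d1 < d2 )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' ]]   # all components equal
    }

    lt 1.15 2 && echo "lcov 1.15 predates 2.x: keep the legacy --rc flags"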
00:38:27.367 16:03:43 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:38:27.367 16:03:43 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:27.367 16:03:43 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:38:27.367 16:03:43 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:38:27.367 16:03:43 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:38:27.367 16:03:43 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:38:27.367 16:03:43 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:38:27.367 16:03:43 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:38:27.367 16:03:43 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:38:27.367 16:03:43 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:38:27.367 16:03:43 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:38:27.367 16:03:43 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:38:27.367 16:03:43 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:38:27.367 16:03:43 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:38:27.367 16:03:43 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:38:27.367 16:03:43 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:38:27.367 16:03:43 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:38:27.367 16:03:43 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:38:27.367 16:03:43 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:38:27.367 16:03:43 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:38:27.367 16:03:43 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:38:27.367 16:03:43 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:38:27.367 16:03:43 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:27.367 16:03:43 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:27.367 16:03:43 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:27.367 16:03:43 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:38:27.367 16:03:43 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:38:27.367 16:03:43 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:38:27.367 16:03:43 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:38:27.367 16:03:43 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:38:27.367 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:38:27.367 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:38:27.367 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:38:27.367 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:38:27.367 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:38:27.367 16:03:43 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=72181 00:38:27.367 16:03:43 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:38:27.367 16:03:43 ftl -- ftl/ftl.sh@38 -- # waitforlisten 72181 00:38:27.367 16:03:43 ftl -- common/autotest_common.sh@833 -- # '[' -z 72181 ']' 00:38:27.367 16:03:43 ftl -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:27.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:27.367 16:03:43 ftl -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:27.367 16:03:43 ftl -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:27.368 16:03:43 ftl -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:27.368 16:03:43 ftl -- common/autotest_common.sh@10 -- # set +x 00:38:27.368 [2024-11-05 16:03:43.971005] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 00:38:27.368 [2024-11-05 16:03:43.971218] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72181 ] 00:38:27.368 [2024-11-05 16:03:44.120868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:27.368 [2024-11-05 16:03:44.198698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:27.368 16:03:44 ftl -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:27.368 16:03:44 ftl -- common/autotest_common.sh@866 -- # return 0 00:38:27.368 16:03:44 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:38:27.368 16:03:44 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:38:27.368 16:03:45 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:38:27.368 16:03:45 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:38:27.368 16:03:46 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:38:27.368 16:03:46 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:38:27.368 16:03:46 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:38:27.368 16:03:46 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:38:27.368 16:03:46 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:38:27.368 16:03:46 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:38:27.368 16:03:46 ftl -- ftl/ftl.sh@50 -- # break 00:38:27.368 16:03:46 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:38:27.368 16:03:46 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:38:27.368 16:03:46 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:38:27.368 16:03:46 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:38:27.368 16:03:46 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:38:27.368 16:03:46 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:38:27.368 16:03:46 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:38:27.368 16:03:46 ftl -- ftl/ftl.sh@63 -- # break 00:38:27.368 16:03:46 ftl -- ftl/ftl.sh@66 -- # killprocess 72181 00:38:27.368 16:03:46 ftl -- common/autotest_common.sh@952 -- # '[' -z 72181 ']' 00:38:27.368 16:03:46 ftl -- common/autotest_common.sh@956 -- # kill -0 72181 00:38:27.368 16:03:46 ftl -- common/autotest_common.sh@957 -- # uname 00:38:27.368 16:03:46 ftl -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:27.368 16:03:46 ftl -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72181 00:38:27.368 killing process with pid 72181 00:38:27.368 16:03:46 ftl -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:38:27.368 16:03:46 ftl -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:38:27.368 16:03:46 ftl -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72181' 00:38:27.368 16:03:46 ftl -- common/autotest_common.sh@971 -- # kill 72181 00:38:27.368 16:03:46 ftl -- common/autotest_common.sh@976 -- # wait 72181 00:38:27.368 16:03:47 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:38:27.368 16:03:47 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:38:27.368 16:03:47 ftl -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:38:27.368 16:03:47 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:38:27.368 16:03:47 ftl -- common/autotest_common.sh@10 -- # set +x 00:38:27.368 ************************************ 00:38:27.368 START TEST ftl_fio_basic 00:38:27.368 ************************************ 00:38:27.368 16:03:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:38:27.368 * Looking for test storage... 00:38:27.368 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:38:27.368 16:03:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:27.368 16:03:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # lcov --version 00:38:27.368 16:03:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:27.368 16:03:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:27.368 16:03:47 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:27.368 16:03:47 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:27.368 16:03:47 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:27.368 16:03:47 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:38:27.368 16:03:47 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:38:27.368 16:03:47 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:38:27.368 16:03:47 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:38:27.368 16:03:47 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:38:27.368 16:03:47 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:38:27.368 16:03:47 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:38:27.368 16:03:47 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:27.368 16:03:47 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:38:27.368 16:03:47 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:38:27.368 16:03:47 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:27.368 16:03:47 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:27.368 16:03:47 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:38:27.368 16:03:47 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:38:27.368 16:03:47 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:27.368 16:03:47 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:38:27.368 16:03:47 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:38:27.368 16:03:47 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:38:27.368 16:03:47 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:38:27.368 16:03:47 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:27.368 16:03:47 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:38:27.368 16:03:47 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:38:27.368 16:03:47 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:27.368 16:03:47 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:27.368 16:03:47 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:38:27.368 16:03:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:27.368 16:03:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:27.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:27.368 --rc genhtml_branch_coverage=1 00:38:27.368 --rc genhtml_function_coverage=1 00:38:27.368 --rc genhtml_legend=1 00:38:27.368 --rc geninfo_all_blocks=1 00:38:27.368 --rc geninfo_unexecuted_blocks=1 00:38:27.368 00:38:27.368 ' 00:38:27.368 16:03:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:38:27.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:27.368 --rc genhtml_branch_coverage=1 00:38:27.368 --rc genhtml_function_coverage=1 00:38:27.368 --rc genhtml_legend=1 00:38:27.368 --rc geninfo_all_blocks=1 00:38:27.368 --rc geninfo_unexecuted_blocks=1 00:38:27.368 00:38:27.368 ' 00:38:27.368 16:03:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:38:27.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:27.368 --rc genhtml_branch_coverage=1 00:38:27.368 --rc genhtml_function_coverage=1 00:38:27.368 --rc genhtml_legend=1 00:38:27.368 --rc geninfo_all_blocks=1 00:38:27.368 --rc geninfo_unexecuted_blocks=1 00:38:27.368 00:38:27.368 ' 00:38:27.368 16:03:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:27.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:27.368 --rc genhtml_branch_coverage=1 00:38:27.368 --rc genhtml_function_coverage=1 00:38:27.368 --rc genhtml_legend=1 00:38:27.368 --rc geninfo_all_blocks=1 00:38:27.368 --rc geninfo_unexecuted_blocks=1 00:38:27.368 00:38:27.368 ' 00:38:27.368 16:03:47 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:38:27.368 16:03:47 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:38:27.368 16:03:47 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:38:27.368 16:03:47 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:38:27.368 16:03:47 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:38:27.368 16:03:47 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:38:27.368 16:03:47 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:27.368 16:03:47 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:38:27.368 16:03:47 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:38:27.368 16:03:47 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:38:27.368 16:03:47 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:38:27.368 16:03:47 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:38:27.368 16:03:47 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:38:27.368 16:03:47 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:38:27.368 16:03:47 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:38:27.368 16:03:47 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:38:27.368 16:03:47 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:38:27.368 16:03:47 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:38:27.368 16:03:47 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:38:27.368 16:03:47 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:38:27.368 16:03:47 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:38:27.368 16:03:47 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:38:27.368 16:03:47 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:38:27.368 16:03:47 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:38:27.368 16:03:47 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:38:27.368 16:03:47 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:38:27.368 16:03:47 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:38:27.368 16:03:47 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:27.369 16:03:47 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:27.369 16:03:47 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:38:27.369 16:03:47 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:38:27.369 16:03:47 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:38:27.369 16:03:47 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:38:27.369 16:03:47 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:27.369 16:03:47 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:38:27.369 16:03:47 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:38:27.369 16:03:47 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:38:27.369 16:03:47 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:38:27.369 16:03:47 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:38:27.369 16:03:47 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:38:27.369 16:03:47 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:38:27.369 16:03:47 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:38:27.369 16:03:47 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:38:27.369 16:03:47 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:38:27.369 16:03:47 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:38:27.369 16:03:47 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:38:27.369 16:03:47 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=72313 00:38:27.369 16:03:47 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 72313 00:38:27.369 16:03:47 ftl.ftl_fio_basic -- common/autotest_common.sh@833 -- # '[' -z 72313 ']' 00:38:27.369 16:03:47 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:38:27.369 16:03:47 ftl.ftl_fio_basic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:27.369 16:03:47 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:27.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:27.369 16:03:47 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:27.369 16:03:47 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:27.369 16:03:47 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:38:27.369 [2024-11-05 16:03:47.879400] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 
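killprocess, invoked after each suite in this log (pids 70841, 71376, and later 72181), follows the same traced steps every time; a rough reconstruction from the xtrace, not the exact autotest_common.sh source (the real helper also special-cases sudo-owned processes and FreeBSD):

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 1                      # must still be alive
        if [[ $(uname) == Linux ]]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            echo "killing process with pid $pid"
            [[ $process_name == sudo ]] || kill "$pid"  # reactor_0 in this run
            wait "$pid" || true
        fi
    }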
00:38:27.369 [2024-11-05 16:03:47.879651] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72313 ] 00:38:27.369 [2024-11-05 16:03:48.036186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:27.369 [2024-11-05 16:03:48.114182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:27.369 [2024-11-05 16:03:48.114423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:27.369 [2024-11-05 16:03:48.114500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:27.369 16:03:48 ftl.ftl_fio_basic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:27.369 16:03:48 ftl.ftl_fio_basic -- common/autotest_common.sh@866 -- # return 0 00:38:27.369 16:03:48 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:38:27.369 16:03:48 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:38:27.369 16:03:48 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:38:27.369 16:03:48 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:38:27.369 16:03:48 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:38:27.369 16:03:48 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:38:27.628 16:03:48 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:38:27.628 16:03:48 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:38:27.628 16:03:48 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:38:27.628 16:03:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:38:27.628 16:03:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:38:27.628 16:03:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:38:27.628 16:03:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:38:27.628 16:03:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:38:27.887 16:03:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:38:27.887 { 00:38:27.887 "name": "nvme0n1", 00:38:27.887 "aliases": [ 00:38:27.887 "b4bcf9e9-cb7c-477f-9d7a-8c308d087fc6" 00:38:27.887 ], 00:38:27.887 "product_name": "NVMe disk", 00:38:27.887 "block_size": 4096, 00:38:27.887 "num_blocks": 1310720, 00:38:27.887 "uuid": "b4bcf9e9-cb7c-477f-9d7a-8c308d087fc6", 00:38:27.887 "numa_id": -1, 00:38:27.887 "assigned_rate_limits": { 00:38:27.887 "rw_ios_per_sec": 0, 00:38:27.887 "rw_mbytes_per_sec": 0, 00:38:27.887 "r_mbytes_per_sec": 0, 00:38:27.887 "w_mbytes_per_sec": 0 00:38:27.887 }, 00:38:27.887 "claimed": false, 00:38:27.887 "zoned": false, 00:38:27.887 "supported_io_types": { 00:38:27.887 "read": true, 00:38:27.887 "write": true, 00:38:27.887 "unmap": true, 00:38:27.887 "flush": true, 00:38:27.887 "reset": true, 00:38:27.887 "nvme_admin": true, 00:38:27.887 "nvme_io": true, 00:38:27.887 "nvme_io_md": false, 00:38:27.887 "write_zeroes": true, 00:38:27.887 "zcopy": false, 00:38:27.887 "get_zone_info": false, 00:38:27.887 "zone_management": false, 00:38:27.887 "zone_append": false, 00:38:27.887 "compare": true, 00:38:27.887 "compare_and_write": false, 00:38:27.887 "abort": true, 00:38:27.887 
"seek_hole": false, 00:38:27.887 "seek_data": false, 00:38:27.887 "copy": true, 00:38:27.887 "nvme_iov_md": false 00:38:27.887 }, 00:38:27.887 "driver_specific": { 00:38:27.887 "nvme": [ 00:38:27.887 { 00:38:27.887 "pci_address": "0000:00:11.0", 00:38:27.887 "trid": { 00:38:27.887 "trtype": "PCIe", 00:38:27.887 "traddr": "0000:00:11.0" 00:38:27.887 }, 00:38:27.887 "ctrlr_data": { 00:38:27.887 "cntlid": 0, 00:38:27.887 "vendor_id": "0x1b36", 00:38:27.887 "model_number": "QEMU NVMe Ctrl", 00:38:27.887 "serial_number": "12341", 00:38:27.887 "firmware_revision": "8.0.0", 00:38:27.887 "subnqn": "nqn.2019-08.org.qemu:12341", 00:38:27.887 "oacs": { 00:38:27.887 "security": 0, 00:38:27.887 "format": 1, 00:38:27.887 "firmware": 0, 00:38:27.887 "ns_manage": 1 00:38:27.887 }, 00:38:27.887 "multi_ctrlr": false, 00:38:27.887 "ana_reporting": false 00:38:27.887 }, 00:38:27.887 "vs": { 00:38:27.887 "nvme_version": "1.4" 00:38:27.887 }, 00:38:27.887 "ns_data": { 00:38:27.887 "id": 1, 00:38:27.887 "can_share": false 00:38:27.887 } 00:38:27.887 } 00:38:27.887 ], 00:38:27.887 "mp_policy": "active_passive" 00:38:27.887 } 00:38:27.887 } 00:38:27.887 ]' 00:38:27.887 16:03:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:38:27.887 16:03:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:38:27.887 16:03:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:38:27.887 16:03:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=1310720 00:38:27.887 16:03:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:38:27.887 16:03:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 5120 00:38:27.887 16:03:49 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:38:27.887 16:03:49 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:38:27.887 16:03:49 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:38:27.887 16:03:49 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:38:27.887 16:03:49 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:38:28.146 16:03:49 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:38:28.146 16:03:49 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:38:28.404 16:03:49 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=63970a8a-3873-4de6-8d81-0d60a73a9391 00:38:28.404 16:03:49 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 63970a8a-3873-4de6-8d81-0d60a73a9391 00:38:28.662 16:03:49 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=b948a90d-33d9-4657-be1d-81df09145b64 00:38:28.662 16:03:49 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 b948a90d-33d9-4657-be1d-81df09145b64 00:38:28.662 16:03:49 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:38:28.662 16:03:49 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:38:28.662 16:03:49 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=b948a90d-33d9-4657-be1d-81df09145b64 00:38:28.662 16:03:49 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:38:28.662 16:03:49 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size b948a90d-33d9-4657-be1d-81df09145b64 00:38:28.662 16:03:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bdev_name=b948a90d-33d9-4657-be1d-81df09145b64 
00:38:28.662 16:03:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:38:28.662 16:03:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:38:28.662 16:03:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:38:28.662 16:03:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b948a90d-33d9-4657-be1d-81df09145b64 00:38:28.662 16:03:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:38:28.662 { 00:38:28.662 "name": "b948a90d-33d9-4657-be1d-81df09145b64", 00:38:28.662 "aliases": [ 00:38:28.662 "lvs/nvme0n1p0" 00:38:28.662 ], 00:38:28.662 "product_name": "Logical Volume", 00:38:28.662 "block_size": 4096, 00:38:28.662 "num_blocks": 26476544, 00:38:28.662 "uuid": "b948a90d-33d9-4657-be1d-81df09145b64", 00:38:28.662 "assigned_rate_limits": { 00:38:28.662 "rw_ios_per_sec": 0, 00:38:28.662 "rw_mbytes_per_sec": 0, 00:38:28.662 "r_mbytes_per_sec": 0, 00:38:28.662 "w_mbytes_per_sec": 0 00:38:28.662 }, 00:38:28.662 "claimed": false, 00:38:28.662 "zoned": false, 00:38:28.662 "supported_io_types": { 00:38:28.662 "read": true, 00:38:28.662 "write": true, 00:38:28.662 "unmap": true, 00:38:28.662 "flush": false, 00:38:28.662 "reset": true, 00:38:28.662 "nvme_admin": false, 00:38:28.662 "nvme_io": false, 00:38:28.662 "nvme_io_md": false, 00:38:28.662 "write_zeroes": true, 00:38:28.662 "zcopy": false, 00:38:28.662 "get_zone_info": false, 00:38:28.662 "zone_management": false, 00:38:28.662 "zone_append": false, 00:38:28.662 "compare": false, 00:38:28.662 "compare_and_write": false, 00:38:28.663 "abort": false, 00:38:28.663 "seek_hole": true, 00:38:28.663 "seek_data": true, 00:38:28.663 "copy": false, 00:38:28.663 "nvme_iov_md": false 00:38:28.663 }, 00:38:28.663 "driver_specific": { 00:38:28.663 "lvol": { 00:38:28.663 "lvol_store_uuid": "63970a8a-3873-4de6-8d81-0d60a73a9391", 00:38:28.663 "base_bdev": "nvme0n1", 00:38:28.663 "thin_provision": true, 00:38:28.663 "num_allocated_clusters": 0, 00:38:28.663 "snapshot": false, 00:38:28.663 "clone": false, 00:38:28.663 "esnap_clone": false 00:38:28.663 } 00:38:28.663 } 00:38:28.663 } 00:38:28.663 ]' 00:38:28.663 16:03:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:38:28.921 16:03:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:38:28.921 16:03:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:38:28.921 16:03:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=26476544 00:38:28.921 16:03:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:38:28.921 16:03:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 103424 00:38:28.921 16:03:50 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:38:28.921 16:03:50 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:38:28.921 16:03:50 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:38:29.179 16:03:50 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:38:29.179 16:03:50 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:38:29.179 16:03:50 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size b948a90d-33d9-4657-be1d-81df09145b64 00:38:29.179 16:03:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bdev_name=b948a90d-33d9-4657-be1d-81df09145b64 00:38:29.179 16:03:50 
ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:38:29.179 16:03:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:38:29.179 16:03:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:38:29.179 16:03:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b948a90d-33d9-4657-be1d-81df09145b64 00:38:29.179 16:03:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:38:29.179 { 00:38:29.179 "name": "b948a90d-33d9-4657-be1d-81df09145b64", 00:38:29.179 "aliases": [ 00:38:29.179 "lvs/nvme0n1p0" 00:38:29.179 ], 00:38:29.179 "product_name": "Logical Volume", 00:38:29.179 "block_size": 4096, 00:38:29.179 "num_blocks": 26476544, 00:38:29.179 "uuid": "b948a90d-33d9-4657-be1d-81df09145b64", 00:38:29.179 "assigned_rate_limits": { 00:38:29.179 "rw_ios_per_sec": 0, 00:38:29.179 "rw_mbytes_per_sec": 0, 00:38:29.179 "r_mbytes_per_sec": 0, 00:38:29.179 "w_mbytes_per_sec": 0 00:38:29.179 }, 00:38:29.179 "claimed": false, 00:38:29.179 "zoned": false, 00:38:29.179 "supported_io_types": { 00:38:29.179 "read": true, 00:38:29.179 "write": true, 00:38:29.179 "unmap": true, 00:38:29.179 "flush": false, 00:38:29.179 "reset": true, 00:38:29.179 "nvme_admin": false, 00:38:29.179 "nvme_io": false, 00:38:29.179 "nvme_io_md": false, 00:38:29.179 "write_zeroes": true, 00:38:29.179 "zcopy": false, 00:38:29.179 "get_zone_info": false, 00:38:29.179 "zone_management": false, 00:38:29.179 "zone_append": false, 00:38:29.179 "compare": false, 00:38:29.179 "compare_and_write": false, 00:38:29.179 "abort": false, 00:38:29.179 "seek_hole": true, 00:38:29.179 "seek_data": true, 00:38:29.179 "copy": false, 00:38:29.179 "nvme_iov_md": false 00:38:29.179 }, 00:38:29.179 "driver_specific": { 00:38:29.179 "lvol": { 00:38:29.179 "lvol_store_uuid": "63970a8a-3873-4de6-8d81-0d60a73a9391", 00:38:29.179 "base_bdev": "nvme0n1", 00:38:29.179 "thin_provision": true, 00:38:29.179 "num_allocated_clusters": 0, 00:38:29.179 "snapshot": false, 00:38:29.179 "clone": false, 00:38:29.179 "esnap_clone": false 00:38:29.179 } 00:38:29.179 } 00:38:29.179 } 00:38:29.179 ]' 00:38:29.180 16:03:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:38:29.438 16:03:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:38:29.438 16:03:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:38:29.438 16:03:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=26476544 00:38:29.438 16:03:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:38:29.438 16:03:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 103424 00:38:29.438 16:03:50 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:38:29.438 16:03:50 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:38:29.438 16:03:50 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:38:29.438 16:03:50 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:38:29.438 16:03:50 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:38:29.438 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:38:29.438 16:03:50 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size b948a90d-33d9-4657-be1d-81df09145b64 00:38:29.438 16:03:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local 
bdev_name=b948a90d-33d9-4657-be1d-81df09145b64 00:38:29.438 16:03:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:38:29.438 16:03:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:38:29.438 16:03:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:38:29.438 16:03:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b948a90d-33d9-4657-be1d-81df09145b64 00:38:29.696 16:03:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:38:29.696 { 00:38:29.696 "name": "b948a90d-33d9-4657-be1d-81df09145b64", 00:38:29.696 "aliases": [ 00:38:29.696 "lvs/nvme0n1p0" 00:38:29.696 ], 00:38:29.696 "product_name": "Logical Volume", 00:38:29.696 "block_size": 4096, 00:38:29.696 "num_blocks": 26476544, 00:38:29.696 "uuid": "b948a90d-33d9-4657-be1d-81df09145b64", 00:38:29.696 "assigned_rate_limits": { 00:38:29.696 "rw_ios_per_sec": 0, 00:38:29.696 "rw_mbytes_per_sec": 0, 00:38:29.696 "r_mbytes_per_sec": 0, 00:38:29.696 "w_mbytes_per_sec": 0 00:38:29.696 }, 00:38:29.696 "claimed": false, 00:38:29.696 "zoned": false, 00:38:29.696 "supported_io_types": { 00:38:29.696 "read": true, 00:38:29.696 "write": true, 00:38:29.696 "unmap": true, 00:38:29.696 "flush": false, 00:38:29.696 "reset": true, 00:38:29.696 "nvme_admin": false, 00:38:29.696 "nvme_io": false, 00:38:29.696 "nvme_io_md": false, 00:38:29.696 "write_zeroes": true, 00:38:29.696 "zcopy": false, 00:38:29.696 "get_zone_info": false, 00:38:29.696 "zone_management": false, 00:38:29.696 "zone_append": false, 00:38:29.696 "compare": false, 00:38:29.696 "compare_and_write": false, 00:38:29.696 "abort": false, 00:38:29.696 "seek_hole": true, 00:38:29.696 "seek_data": true, 00:38:29.696 "copy": false, 00:38:29.696 "nvme_iov_md": false 00:38:29.696 }, 00:38:29.696 "driver_specific": { 00:38:29.696 "lvol": { 00:38:29.696 "lvol_store_uuid": "63970a8a-3873-4de6-8d81-0d60a73a9391", 00:38:29.696 "base_bdev": "nvme0n1", 00:38:29.696 "thin_provision": true, 00:38:29.696 "num_allocated_clusters": 0, 00:38:29.696 "snapshot": false, 00:38:29.696 "clone": false, 00:38:29.696 "esnap_clone": false 00:38:29.696 } 00:38:29.697 } 00:38:29.697 } 00:38:29.697 ]' 00:38:29.697 16:03:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:38:29.697 16:03:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:38:29.697 16:03:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:38:29.697 16:03:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=26476544 00:38:29.697 16:03:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:38:29.697 16:03:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 103424 00:38:29.697 16:03:51 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:38:29.697 16:03:51 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:38:29.697 16:03:51 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d b948a90d-33d9-4657-be1d-81df09145b64 -c nvc0n1p0 --l2p_dram_limit 60 00:38:29.956 [2024-11-05 16:03:51.222479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:29.956 [2024-11-05 16:03:51.222519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:38:29.956 [2024-11-05 16:03:51.222532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:38:29.956 
[2024-11-05 16:03:51.222538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:29.956 [2024-11-05 16:03:51.222588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:29.956 [2024-11-05 16:03:51.222597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:38:29.956 [2024-11-05 16:03:51.222605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:38:29.956 [2024-11-05 16:03:51.222611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:29.956 [2024-11-05 16:03:51.222643] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:38:29.956 [2024-11-05 16:03:51.223212] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:38:29.956 [2024-11-05 16:03:51.223239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:29.956 [2024-11-05 16:03:51.223245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:38:29.956 [2024-11-05 16:03:51.223253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.607 ms 00:38:29.956 [2024-11-05 16:03:51.223259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:29.956 [2024-11-05 16:03:51.223319] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 107424c3-6332-4498-b99b-b42ed70273c3 00:38:29.956 [2024-11-05 16:03:51.224258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:29.956 [2024-11-05 16:03:51.224364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:38:29.956 [2024-11-05 16:03:51.224377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:38:29.956 [2024-11-05 16:03:51.224384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:29.956 [2024-11-05 16:03:51.229093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:29.956 [2024-11-05 16:03:51.229121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:38:29.956 [2024-11-05 16:03:51.229129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.648 ms 00:38:29.956 [2024-11-05 16:03:51.229136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:29.956 [2024-11-05 16:03:51.229216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:29.956 [2024-11-05 16:03:51.229225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:38:29.956 [2024-11-05 16:03:51.229231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:38:29.956 [2024-11-05 16:03:51.229240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:29.956 [2024-11-05 16:03:51.229278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:29.956 [2024-11-05 16:03:51.229287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:38:29.956 [2024-11-05 16:03:51.229294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:38:29.956 [2024-11-05 16:03:51.229301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:29.956 [2024-11-05 16:03:51.229322] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:38:29.956 [2024-11-05 16:03:51.232190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:29.956 [2024-11-05 
16:03:51.232214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:38:29.956 [2024-11-05 16:03:51.232225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.872 ms 00:38:29.956 [2024-11-05 16:03:51.232233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:29.956 [2024-11-05 16:03:51.232262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:29.956 [2024-11-05 16:03:51.232268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:38:29.956 [2024-11-05 16:03:51.232276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:38:29.956 [2024-11-05 16:03:51.232282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:29.956 [2024-11-05 16:03:51.232302] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:38:29.956 [2024-11-05 16:03:51.232415] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:38:29.956 [2024-11-05 16:03:51.232426] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:38:29.956 [2024-11-05 16:03:51.232434] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:38:29.956 [2024-11-05 16:03:51.232443] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:38:29.956 [2024-11-05 16:03:51.232450] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:38:29.956 [2024-11-05 16:03:51.232466] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:38:29.956 [2024-11-05 16:03:51.232472] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:38:29.956 [2024-11-05 16:03:51.232479] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:38:29.956 [2024-11-05 16:03:51.232484] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:38:29.956 [2024-11-05 16:03:51.232492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:29.956 [2024-11-05 16:03:51.232500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:38:29.956 [2024-11-05 16:03:51.232507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.190 ms 00:38:29.956 [2024-11-05 16:03:51.232513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:29.956 [2024-11-05 16:03:51.232587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:29.956 [2024-11-05 16:03:51.232593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:38:29.956 [2024-11-05 16:03:51.232600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:38:29.956 [2024-11-05 16:03:51.232606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:29.956 [2024-11-05 16:03:51.232697] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:38:29.956 [2024-11-05 16:03:51.232709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:38:29.956 [2024-11-05 16:03:51.232718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:38:29.956 [2024-11-05 16:03:51.232724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:29.956 [2024-11-05 16:03:51.232731] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:38:29.956 [2024-11-05 16:03:51.232745] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:38:29.956 [2024-11-05 16:03:51.232752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:38:29.956 [2024-11-05 16:03:51.232757] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:38:29.956 [2024-11-05 16:03:51.232763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:38:29.956 [2024-11-05 16:03:51.232769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:38:29.957 [2024-11-05 16:03:51.232775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:38:29.957 [2024-11-05 16:03:51.232780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:38:29.957 [2024-11-05 16:03:51.232786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:38:29.957 [2024-11-05 16:03:51.232791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:38:29.957 [2024-11-05 16:03:51.232798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:38:29.957 [2024-11-05 16:03:51.232803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:29.957 [2024-11-05 16:03:51.232812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:38:29.957 [2024-11-05 16:03:51.232817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:38:29.957 [2024-11-05 16:03:51.232825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:29.957 [2024-11-05 16:03:51.232830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:38:29.957 [2024-11-05 16:03:51.232837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:38:29.957 [2024-11-05 16:03:51.232841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:29.957 [2024-11-05 16:03:51.232848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:38:29.957 [2024-11-05 16:03:51.232853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:38:29.957 [2024-11-05 16:03:51.232860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:29.957 [2024-11-05 16:03:51.232865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:38:29.957 [2024-11-05 16:03:51.232871] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:38:29.957 [2024-11-05 16:03:51.232876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:29.957 [2024-11-05 16:03:51.232882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:38:29.957 [2024-11-05 16:03:51.232888] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:38:29.957 [2024-11-05 16:03:51.232894] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:29.957 [2024-11-05 16:03:51.232899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:38:29.957 [2024-11-05 16:03:51.232906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:38:29.957 [2024-11-05 16:03:51.232911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:38:29.957 [2024-11-05 16:03:51.232917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:38:29.957 [2024-11-05 16:03:51.232931] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:38:29.957 [2024-11-05 16:03:51.232937] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:38:29.957 [2024-11-05 16:03:51.232942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:38:29.957 [2024-11-05 16:03:51.232948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:38:29.957 [2024-11-05 16:03:51.232953] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:29.957 [2024-11-05 16:03:51.232961] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:38:29.957 [2024-11-05 16:03:51.232966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:38:29.957 [2024-11-05 16:03:51.232972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:29.957 [2024-11-05 16:03:51.232976] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:38:29.957 [2024-11-05 16:03:51.232984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:38:29.957 [2024-11-05 16:03:51.232989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:38:29.957 [2024-11-05 16:03:51.232996] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:29.957 [2024-11-05 16:03:51.233001] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:38:29.957 [2024-11-05 16:03:51.233009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:38:29.957 [2024-11-05 16:03:51.233014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:38:29.957 [2024-11-05 16:03:51.233023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:38:29.957 [2024-11-05 16:03:51.233028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:38:29.957 [2024-11-05 16:03:51.233034] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:38:29.957 [2024-11-05 16:03:51.233042] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:38:29.957 [2024-11-05 16:03:51.233049] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:38:29.957 [2024-11-05 16:03:51.233056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:38:29.957 [2024-11-05 16:03:51.233063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:38:29.957 [2024-11-05 16:03:51.233068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:38:29.957 [2024-11-05 16:03:51.233075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:38:29.957 [2024-11-05 16:03:51.233080] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:38:29.957 [2024-11-05 16:03:51.233087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:38:29.957 [2024-11-05 16:03:51.233092] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:38:29.957 [2024-11-05 16:03:51.233098] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:38:29.957 [2024-11-05 16:03:51.233104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:38:29.957 [2024-11-05 16:03:51.233112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:38:29.957 [2024-11-05 16:03:51.233118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:38:29.957 [2024-11-05 16:03:51.233125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:38:29.957 [2024-11-05 16:03:51.233130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:38:29.957 [2024-11-05 16:03:51.233136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:38:29.957 [2024-11-05 16:03:51.233142] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:38:29.957 [2024-11-05 16:03:51.233149] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:38:29.957 [2024-11-05 16:03:51.233157] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:38:29.957 [2024-11-05 16:03:51.233164] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:38:29.957 [2024-11-05 16:03:51.233169] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:38:29.957 [2024-11-05 16:03:51.233176] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:38:29.957 [2024-11-05 16:03:51.233181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:29.957 [2024-11-05 16:03:51.233188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:38:29.957 [2024-11-05 16:03:51.233194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.538 ms 00:38:29.957 [2024-11-05 16:03:51.233201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:29.957 [2024-11-05 16:03:51.233261] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
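These layout numbers line up with the create parameters: ftl0 exposes 20971520 logical 4 KiB blocks (its num_blocks in the bdev JSON further down, i.e. 80 GiB of user LBAs), and with "L2P address size: 4" the full logical-to-physical table is 20971520 * 4 bytes = 80 MiB, exactly the 80.00 MiB l2p region in the NV cache layout above. --l2p_dram_limit 60 caps how much of that table stays resident in DRAM, which is why l2p_cache below reports a maximum resident size of 59 (of 60) MiB. A worked check of the arithmetic:

    # Worked check of the l2p region size from the layout dump above.
    l2p_entries=20971520   # one entry per exposed 4 KiB logical block
    entry_bytes=4          # "L2P address size: 4"
    echo $(( l2p_entries * entry_bytes / 1024 / 1024 ))   # 80 -> "blocks: 80.00 MiB"
    echo $(( l2p_entries * 4096 / 1024 / 1024 / 1024 ))   # 80 -> GiB exposed as ftl0

The scrub announced here walks the 5 NV cache chunks ("NV cache chunk count 5") and dominates startup: 2894.178 ms of the 3248.948 ms total that FTL startup reports at the end.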
00:38:29.957 [2024-11-05 16:03:51.233276] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:38:33.238 [2024-11-05 16:03:54.127451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:33.238 [2024-11-05 16:03:54.127508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:38:33.238 [2024-11-05 16:03:54.127526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2894.178 ms 00:38:33.238 [2024-11-05 16:03:54.127536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:33.238 [2024-11-05 16:03:54.154371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:33.238 [2024-11-05 16:03:54.154424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:38:33.238 [2024-11-05 16:03:54.154443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.613 ms 00:38:33.238 [2024-11-05 16:03:54.154458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:33.238 [2024-11-05 16:03:54.154619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:33.238 [2024-11-05 16:03:54.154641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:38:33.238 [2024-11-05 16:03:54.154655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:38:33.238 [2024-11-05 16:03:54.154672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:33.239 [2024-11-05 16:03:54.193570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:33.239 [2024-11-05 16:03:54.193611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:38:33.239 [2024-11-05 16:03:54.193626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.834 ms 00:38:33.239 [2024-11-05 16:03:54.193636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:33.239 [2024-11-05 16:03:54.193678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:33.239 [2024-11-05 16:03:54.193688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:38:33.239 [2024-11-05 16:03:54.193697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:38:33.239 [2024-11-05 16:03:54.193705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:33.239 [2024-11-05 16:03:54.194053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:33.239 [2024-11-05 16:03:54.194073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:38:33.239 [2024-11-05 16:03:54.194082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:38:33.239 [2024-11-05 16:03:54.194094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:33.239 [2024-11-05 16:03:54.194240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:33.239 [2024-11-05 16:03:54.194351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:38:33.239 [2024-11-05 16:03:54.194371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:38:33.239 [2024-11-05 16:03:54.194387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:33.239 [2024-11-05 16:03:54.211180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:33.239 [2024-11-05 16:03:54.211213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:38:33.239 [2024-11-05 
16:03:54.211223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.756 ms 00:38:33.239 [2024-11-05 16:03:54.211232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:33.239 [2024-11-05 16:03:54.222583] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:38:33.239 [2024-11-05 16:03:54.236504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:33.239 [2024-11-05 16:03:54.236550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:38:33.239 [2024-11-05 16:03:54.236563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.170 ms 00:38:33.239 [2024-11-05 16:03:54.236572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:33.239 [2024-11-05 16:03:54.287422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:33.239 [2024-11-05 16:03:54.287462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:38:33.239 [2024-11-05 16:03:54.287478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.813 ms 00:38:33.239 [2024-11-05 16:03:54.287487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:33.239 [2024-11-05 16:03:54.287666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:33.239 [2024-11-05 16:03:54.287676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:38:33.239 [2024-11-05 16:03:54.287689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:38:33.239 [2024-11-05 16:03:54.287697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:33.239 [2024-11-05 16:03:54.310984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:33.239 [2024-11-05 16:03:54.311021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:38:33.239 [2024-11-05 16:03:54.311034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.219 ms 00:38:33.239 [2024-11-05 16:03:54.311042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:33.239 [2024-11-05 16:03:54.333605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:33.239 [2024-11-05 16:03:54.333637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:38:33.239 [2024-11-05 16:03:54.333651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.520 ms 00:38:33.239 [2024-11-05 16:03:54.333658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:33.239 [2024-11-05 16:03:54.334352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:33.239 [2024-11-05 16:03:54.334381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:38:33.239 [2024-11-05 16:03:54.334393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.651 ms 00:38:33.239 [2024-11-05 16:03:54.334400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:33.239 [2024-11-05 16:03:54.399839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:33.239 [2024-11-05 16:03:54.400028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:38:33.239 [2024-11-05 16:03:54.400059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.396 ms 00:38:33.239 [2024-11-05 16:03:54.400076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:33.239 [2024-11-05 
16:03:54.424020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:33.239 [2024-11-05 16:03:54.424053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:38:33.239 [2024-11-05 16:03:54.424067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.834 ms 00:38:33.239 [2024-11-05 16:03:54.424075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:33.239 [2024-11-05 16:03:54.447369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:33.239 [2024-11-05 16:03:54.447497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:38:33.239 [2024-11-05 16:03:54.447523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.252 ms 00:38:33.239 [2024-11-05 16:03:54.447534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:33.239 [2024-11-05 16:03:54.470745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:33.239 [2024-11-05 16:03:54.470783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:38:33.239 [2024-11-05 16:03:54.470797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.155 ms 00:38:33.239 [2024-11-05 16:03:54.470804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:33.239 [2024-11-05 16:03:54.470845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:33.239 [2024-11-05 16:03:54.470854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:38:33.239 [2024-11-05 16:03:54.470867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:38:33.239 [2024-11-05 16:03:54.470876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:33.239 [2024-11-05 16:03:54.470961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:33.239 [2024-11-05 16:03:54.470970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:38:33.239 [2024-11-05 16:03:54.470980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:38:33.239 [2024-11-05 16:03:54.470987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:33.239 [2024-11-05 16:03:54.471863] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3248.948 ms, result 0 00:38:33.239 { 00:38:33.239 "name": "ftl0", 00:38:33.239 "uuid": "107424c3-6332-4498-b99b-b42ed70273c3" 00:38:33.239 } 00:38:33.239 16:03:54 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:38:33.239 16:03:54 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local bdev_name=ftl0 00:38:33.239 16:03:54 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:38:33.239 16:03:54 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local i 00:38:33.239 16:03:54 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:38:33.239 16:03:54 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:38:33.239 16:03:54 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:33.498 16:03:54 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:38:33.755 [ 00:38:33.755 { 00:38:33.755 "name": "ftl0", 00:38:33.755 "aliases": [ 00:38:33.755 "107424c3-6332-4498-b99b-b42ed70273c3" 00:38:33.755 ], 00:38:33.755 "product_name": "FTL 
disk", 00:38:33.755 "block_size": 4096, 00:38:33.755 "num_blocks": 20971520, 00:38:33.755 "uuid": "107424c3-6332-4498-b99b-b42ed70273c3", 00:38:33.755 "assigned_rate_limits": { 00:38:33.755 "rw_ios_per_sec": 0, 00:38:33.755 "rw_mbytes_per_sec": 0, 00:38:33.755 "r_mbytes_per_sec": 0, 00:38:33.755 "w_mbytes_per_sec": 0 00:38:33.755 }, 00:38:33.755 "claimed": false, 00:38:33.756 "zoned": false, 00:38:33.756 "supported_io_types": { 00:38:33.756 "read": true, 00:38:33.756 "write": true, 00:38:33.756 "unmap": true, 00:38:33.756 "flush": true, 00:38:33.756 "reset": false, 00:38:33.756 "nvme_admin": false, 00:38:33.756 "nvme_io": false, 00:38:33.756 "nvme_io_md": false, 00:38:33.756 "write_zeroes": true, 00:38:33.756 "zcopy": false, 00:38:33.756 "get_zone_info": false, 00:38:33.756 "zone_management": false, 00:38:33.756 "zone_append": false, 00:38:33.756 "compare": false, 00:38:33.756 "compare_and_write": false, 00:38:33.756 "abort": false, 00:38:33.756 "seek_hole": false, 00:38:33.756 "seek_data": false, 00:38:33.756 "copy": false, 00:38:33.756 "nvme_iov_md": false 00:38:33.756 }, 00:38:33.756 "driver_specific": { 00:38:33.756 "ftl": { 00:38:33.756 "base_bdev": "b948a90d-33d9-4657-be1d-81df09145b64", 00:38:33.756 "cache": "nvc0n1p0" 00:38:33.756 } 00:38:33.756 } 00:38:33.756 } 00:38:33.756 ] 00:38:33.756 16:03:54 ftl.ftl_fio_basic -- common/autotest_common.sh@909 -- # return 0 00:38:33.756 16:03:54 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:38:33.756 16:03:54 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:38:33.756 16:03:55 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:38:33.756 16:03:55 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:38:34.015 [2024-11-05 16:03:55.260597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:34.015 [2024-11-05 16:03:55.260745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:38:34.015 [2024-11-05 16:03:55.260798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:38:34.015 [2024-11-05 16:03:55.260836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:34.015 [2024-11-05 16:03:55.260879] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:38:34.015 [2024-11-05 16:03:55.263066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:34.015 [2024-11-05 16:03:55.263151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:38:34.015 [2024-11-05 16:03:55.263196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.111 ms 00:38:34.015 [2024-11-05 16:03:55.263213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:34.015 [2024-11-05 16:03:55.263562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:34.015 [2024-11-05 16:03:55.263622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:38:34.015 [2024-11-05 16:03:55.263660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:38:34.015 [2024-11-05 16:03:55.263677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:34.015 [2024-11-05 16:03:55.266136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:34.015 [2024-11-05 16:03:55.266192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:38:34.015 
[2024-11-05 16:03:55.266241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.426 ms 00:38:34.015 [2024-11-05 16:03:55.266258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:34.015 [2024-11-05 16:03:55.270973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:34.016 [2024-11-05 16:03:55.271054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:38:34.016 [2024-11-05 16:03:55.271097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.685 ms 00:38:34.016 [2024-11-05 16:03:55.271115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:34.016 [2024-11-05 16:03:55.289338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:34.016 [2024-11-05 16:03:55.289427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:38:34.016 [2024-11-05 16:03:55.289469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.157 ms 00:38:34.016 [2024-11-05 16:03:55.289486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:34.016 [2024-11-05 16:03:55.301425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:34.016 [2024-11-05 16:03:55.301451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:38:34.016 [2024-11-05 16:03:55.301462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.834 ms 00:38:34.016 [2024-11-05 16:03:55.301470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:34.016 [2024-11-05 16:03:55.301612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:34.016 [2024-11-05 16:03:55.301620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:38:34.016 [2024-11-05 16:03:55.301628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:38:34.016 [2024-11-05 16:03:55.301633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:34.016 [2024-11-05 16:03:55.318920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:34.016 [2024-11-05 16:03:55.318944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:38:34.016 [2024-11-05 16:03:55.318954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.264 ms 00:38:34.016 [2024-11-05 16:03:55.318959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:34.016 [2024-11-05 16:03:55.336187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:34.016 [2024-11-05 16:03:55.336287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:38:34.016 [2024-11-05 16:03:55.336301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.193 ms 00:38:34.016 [2024-11-05 16:03:55.336306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:34.016 [2024-11-05 16:03:55.353347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:34.016 [2024-11-05 16:03:55.353372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:38:34.016 [2024-11-05 16:03:55.353381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.003 ms 00:38:34.016 [2024-11-05 16:03:55.353387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:34.016 [2024-11-05 16:03:55.370286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:34.016 [2024-11-05 16:03:55.370322] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:38:34.016 [2024-11-05 16:03:55.370331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.827 ms 00:38:34.016 [2024-11-05 16:03:55.370337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:34.016 [2024-11-05 16:03:55.370367] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:38:34.016 [2024-11-05 16:03:55.370378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:38:34.016 [2024-11-05 16:03:55.370387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:38:34.016 [2024-11-05 16:03:55.370393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:38:34.016 [2024-11-05 16:03:55.370400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:38:34.016 [2024-11-05 16:03:55.370405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:38:34.016 [2024-11-05 16:03:55.370412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:38:34.016 [2024-11-05 16:03:55.370418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:38:34.016 [2024-11-05 16:03:55.370427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:38:34.016 [2024-11-05 16:03:55.370432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:38:34.016 [2024-11-05 16:03:55.370439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:38:34.016 [2024-11-05 16:03:55.370445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:38:34.016 [2024-11-05 16:03:55.370452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:38:34.016 [2024-11-05 16:03:55.370457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:38:34.016 [2024-11-05 16:03:55.370464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:38:34.016 [2024-11-05 16:03:55.370470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:38:34.016 [2024-11-05 16:03:55.370477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:38:34.016 [2024-11-05 16:03:55.370482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:38:34.016 [2024-11-05 16:03:55.370490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:38:34.016 [2024-11-05 16:03:55.370496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:38:34.016 [2024-11-05 16:03:55.370503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:38:34.016 [2024-11-05 16:03:55.370508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:38:34.016 [2024-11-05 16:03:55.370517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:38:34.016 
[2024-11-05 16:03:55.370522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:38:34.016 [2024-11-05 16:03:55.370530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:38:34.016 [2024-11-05 16:03:55.370536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:38:34.016 [2024-11-05 16:03:55.370543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:38:34.016 [2024-11-05 16:03:55.370549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:38:34.016 [2024-11-05 16:03:55.370556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:38:34.016 [2024-11-05 16:03:55.370564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:38:34.016 [2024-11-05 16:03:55.370572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:38:34.016 [2024-11-05 16:03:55.370577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:38:34.016 [2024-11-05 16:03:55.370584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:38:34.016 [2024-11-05 16:03:55.370590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:38:34.016 [2024-11-05 16:03:55.370597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:38:34.016 [2024-11-05 16:03:55.370602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:38:34.016 [2024-11-05 16:03:55.370610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:38:34.016 [2024-11-05 16:03:55.370615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:38:34.016 [2024-11-05 16:03:55.370622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:38:34.016 [2024-11-05 16:03:55.370628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:38:34.016 [2024-11-05 16:03:55.370636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:38:34.016 [2024-11-05 16:03:55.370641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:38:34.016 [2024-11-05 16:03:55.370648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:38:34.016 [2024-11-05 16:03:55.370654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:38:34.016 [2024-11-05 16:03:55.370660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:38:34.016 [2024-11-05 16:03:55.370666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:38:34.016 [2024-11-05 16:03:55.370673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:38:34.016 [2024-11-05 16:03:55.370678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:38:34.016 [2024-11-05 16:03:55.370686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 48-96: 0 / 261120 wr_cnt: 0 state: free 00:38:34.017 [2024-11-05 16:03:55.371171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*:
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:38:34.017 [2024-11-05 16:03:55.371178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:38:34.017 [2024-11-05 16:03:55.371183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:38:34.017 [2024-11-05 16:03:55.371192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:38:34.017 [2024-11-05 16:03:55.371210] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:38:34.017 [2024-11-05 16:03:55.371218] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 107424c3-6332-4498-b99b-b42ed70273c3 00:38:34.017 [2024-11-05 16:03:55.371224] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:38:34.017 [2024-11-05 16:03:55.371232] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:38:34.017 [2024-11-05 16:03:55.371237] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:38:34.017 [2024-11-05 16:03:55.371246] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:38:34.017 [2024-11-05 16:03:55.371251] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:38:34.017 [2024-11-05 16:03:55.371258] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:38:34.017 [2024-11-05 16:03:55.371264] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:38:34.017 [2024-11-05 16:03:55.371270] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:38:34.017 [2024-11-05 16:03:55.371275] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:38:34.017 [2024-11-05 16:03:55.371282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:34.017 [2024-11-05 16:03:55.371287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:38:34.017 [2024-11-05 16:03:55.371295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.915 ms 00:38:34.017 [2024-11-05 16:03:55.371300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:34.276 [2024-11-05 16:03:55.380824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:34.276 [2024-11-05 16:03:55.380849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:38:34.276 [2024-11-05 16:03:55.380858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.495 ms 00:38:34.276 [2024-11-05 16:03:55.380864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:34.276 [2024-11-05 16:03:55.381137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:34.276 [2024-11-05 16:03:55.381144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:38:34.276 [2024-11-05 16:03:55.381151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:38:34.276 [2024-11-05 16:03:55.381157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:34.276 [2024-11-05 16:03:55.415571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:34.276 [2024-11-05 16:03:55.415600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:38:34.276 [2024-11-05 16:03:55.415610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:34.276 [2024-11-05 16:03:55.415617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:38:34.276 [2024-11-05 16:03:55.415670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:34.276 [2024-11-05 16:03:55.415677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:38:34.276 [2024-11-05 16:03:55.415684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:34.276 [2024-11-05 16:03:55.415690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:34.276 [2024-11-05 16:03:55.415763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:34.276 [2024-11-05 16:03:55.415771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:38:34.276 [2024-11-05 16:03:55.415780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:34.276 [2024-11-05 16:03:55.415786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:34.276 [2024-11-05 16:03:55.415806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:34.276 [2024-11-05 16:03:55.415812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:38:34.276 [2024-11-05 16:03:55.415819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:34.276 [2024-11-05 16:03:55.415825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:34.276 [2024-11-05 16:03:55.477840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:34.276 [2024-11-05 16:03:55.477978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:38:34.276 [2024-11-05 16:03:55.477993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:34.276 [2024-11-05 16:03:55.477999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:34.276 [2024-11-05 16:03:55.525783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:34.276 [2024-11-05 16:03:55.525818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:38:34.276 [2024-11-05 16:03:55.525828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:34.276 [2024-11-05 16:03:55.525834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:34.276 [2024-11-05 16:03:55.525908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:34.276 [2024-11-05 16:03:55.525916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:38:34.276 [2024-11-05 16:03:55.525924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:34.276 [2024-11-05 16:03:55.525932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:34.276 [2024-11-05 16:03:55.525979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:34.277 [2024-11-05 16:03:55.525986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:38:34.277 [2024-11-05 16:03:55.525993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:34.277 [2024-11-05 16:03:55.525999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:34.277 [2024-11-05 16:03:55.526078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:34.277 [2024-11-05 16:03:55.526086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:38:34.277 [2024-11-05 16:03:55.526093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:34.277 [2024-11-05 
16:03:55.526099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:34.277 [2024-11-05 16:03:55.526142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:34.277 [2024-11-05 16:03:55.526148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:38:34.277 [2024-11-05 16:03:55.526155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:34.277 [2024-11-05 16:03:55.526161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:34.277 [2024-11-05 16:03:55.526198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:34.277 [2024-11-05 16:03:55.526204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:38:34.277 [2024-11-05 16:03:55.526211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:34.277 [2024-11-05 16:03:55.526217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:34.277 [2024-11-05 16:03:55.526258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:34.277 [2024-11-05 16:03:55.526265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:38:34.277 [2024-11-05 16:03:55.526273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:34.277 [2024-11-05 16:03:55.526278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:34.277 [2024-11-05 16:03:55.526416] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 265.800 ms, result 0 00:38:34.277 true 00:38:34.277 16:03:55 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 72313 00:38:34.277 16:03:55 ftl.ftl_fio_basic -- common/autotest_common.sh@952 -- # '[' -z 72313 ']' 00:38:34.277 16:03:55 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # kill -0 72313 00:38:34.277 16:03:55 ftl.ftl_fio_basic -- common/autotest_common.sh@957 -- # uname 00:38:34.277 16:03:55 ftl.ftl_fio_basic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:34.277 16:03:55 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72313 00:38:34.277 killing process with pid 72313 00:38:34.277 16:03:55 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:38:34.277 16:03:55 ftl.ftl_fio_basic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:38:34.277 16:03:55 ftl.ftl_fio_basic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72313' 00:38:34.277 16:03:55 ftl.ftl_fio_basic -- common/autotest_common.sh@971 -- # kill 72313 00:38:34.277 16:03:55 ftl.ftl_fio_basic -- common/autotest_common.sh@976 -- # wait 72313 00:38:40.834 16:04:01 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:38:40.834 16:04:01 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:38:40.834 16:04:01 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:38:40.834 16:04:01 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:40.834 16:04:01 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:38:40.834 16:04:01 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:38:40.834 16:04:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:38:40.834 16:04:01 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:38:40.834 16:04:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:40.834 16:04:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local sanitizers 00:38:40.834 16:04:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:38:40.834 16:04:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # shift 00:38:40.834 16:04:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # local asan_lib= 00:38:40.834 16:04:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:38:40.834 16:04:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:38:40.834 16:04:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:38:40.834 16:04:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # grep libasan 00:38:40.834 16:04:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:38:40.834 16:04:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:38:40.834 16:04:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # break 00:38:40.834 16:04:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:38:40.834 16:04:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:38:40.834 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:38:40.834 fio-3.35 00:38:40.834 Starting 1 thread 00:38:45.126 00:38:45.126 test: (groupid=0, jobs=1): err= 0: pid=72496: Tue Nov 5 16:04:05 2024 00:38:45.126 read: IOPS=1094, BW=72.7MiB/s (76.2MB/s)(255MiB/3503msec) 00:38:45.126 slat (nsec): min=2933, max=93514, avg=4409.12, stdev=2390.47 00:38:45.126 clat (usec): min=244, max=4348, avg=413.76, stdev=173.45 00:38:45.126 lat (usec): min=248, max=4358, avg=418.17, stdev=173.85 00:38:45.126 clat percentiles (usec): 00:38:45.126 | 1.00th=[ 285], 5.00th=[ 289], 10.00th=[ 293], 20.00th=[ 306], 00:38:45.126 | 30.00th=[ 318], 40.00th=[ 322], 50.00th=[ 326], 60.00th=[ 392], 00:38:45.126 | 70.00th=[ 461], 80.00th=[ 523], 90.00th=[ 553], 95.00th=[ 791], 00:38:45.126 | 99.00th=[ 938], 99.50th=[ 1029], 99.90th=[ 1745], 99.95th=[ 2737], 00:38:45.126 | 99.99th=[ 4359] 00:38:45.126 write: IOPS=1101, BW=73.2MiB/s (76.7MB/s)(256MiB/3500msec); 0 zone resets 00:38:45.126 slat (usec): min=13, max=106, avg=18.67, stdev= 3.44 00:38:45.126 clat (usec): min=288, max=4554, avg=461.16, stdev=203.36 00:38:45.126 lat (usec): min=308, max=4578, avg=479.83, stdev=203.58 00:38:45.126 clat percentiles (usec): 00:38:45.126 | 1.00th=[ 302], 5.00th=[ 310], 10.00th=[ 314], 20.00th=[ 334], 00:38:45.126 | 30.00th=[ 347], 40.00th=[ 351], 50.00th=[ 375], 60.00th=[ 433], 00:38:45.126 | 70.00th=[ 545], 80.00th=[ 553], 90.00th=[ 644], 95.00th=[ 881], 00:38:45.126 | 99.00th=[ 1057], 99.50th=[ 1188], 99.90th=[ 1434], 99.95th=[ 4146], 00:38:45.126 | 99.99th=[ 4555] 00:38:45.126 bw ( KiB/s): min=59024, max=93160, per=100.00%, avg=74916.57, stdev=15568.57, samples=7 00:38:45.126 iops : min= 868, max= 1370, avg=1101.71, stdev=228.95, samples=7 00:38:45.126 lat (usec) : 250=0.03%, 500=71.28%, 750=21.93%, 
1000=5.80% 00:38:45.126 lat (msec) : 2=0.90%, 4=0.01%, 10=0.05% 00:38:45.126 cpu : usr=99.37%, sys=0.03%, ctx=7, majf=0, minf=1169 00:38:45.126 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:45.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:45.126 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:45.126 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:45.126 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:45.126 00:38:45.126 Run status group 0 (all jobs): 00:38:45.126 READ: bw=72.7MiB/s (76.2MB/s), 72.7MiB/s-72.7MiB/s (76.2MB/s-76.2MB/s), io=255MiB (267MB), run=3503-3503msec 00:38:45.126 WRITE: bw=73.2MiB/s (76.7MB/s), 73.2MiB/s-73.2MiB/s (76.7MB/s-76.7MB/s), io=256MiB (269MB), run=3500-3500msec 00:38:46.067 ----------------------------------------------------- 00:38:46.067 Suppressions used: 00:38:46.067 count bytes template 00:38:46.067 1 5 /usr/src/fio/parse.c 00:38:46.067 1 8 libtcmalloc_minimal.so 00:38:46.067 1 904 libcrypto.so 00:38:46.067 ----------------------------------------------------- 00:38:46.067 00:38:46.067 16:04:07 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:38:46.067 16:04:07 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:46.067 16:04:07 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:38:46.067 16:04:07 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:38:46.067 16:04:07 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:38:46.067 16:04:07 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:46.067 16:04:07 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:38:46.067 16:04:07 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:38:46.067 16:04:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:38:46.067 16:04:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:38:46.067 16:04:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:46.067 16:04:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local sanitizers 00:38:46.067 16:04:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:38:46.067 16:04:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # shift 00:38:46.067 16:04:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # local asan_lib= 00:38:46.067 16:04:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:38:46.067 16:04:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:38:46.067 16:04:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:38:46.067 16:04:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # grep libasan 00:38:46.067 16:04:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:38:46.067 16:04:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:38:46.067 16:04:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # break 00:38:46.067 16:04:07 ftl.ftl_fio_basic 
-- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:38:46.067 16:04:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:38:46.067 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:38:46.067 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:38:46.067 fio-3.35 00:38:46.067 Starting 2 threads 00:39:12.608 00:39:12.608 first_half: (groupid=0, jobs=1): err= 0: pid=72588: Tue Nov 5 16:04:30 2024 00:39:12.608 read: IOPS=2952, BW=11.5MiB/s (12.1MB/s)(255MiB/22096msec) 00:39:12.608 slat (nsec): min=2980, max=19036, avg=3715.17, stdev=644.38 00:39:12.608 clat (usec): min=576, max=255058, avg=32407.38, stdev=17007.85 00:39:12.608 lat (usec): min=579, max=255063, avg=32411.09, stdev=17007.89 00:39:12.608 clat percentiles (msec): 00:39:12.608 | 1.00th=[ 7], 5.00th=[ 23], 10.00th=[ 27], 20.00th=[ 30], 00:39:12.608 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 31], 60.00th=[ 31], 00:39:12.608 | 70.00th=[ 31], 80.00th=[ 34], 90.00th=[ 37], 95.00th=[ 41], 00:39:12.608 | 99.00th=[ 130], 99.50th=[ 144], 99.90th=[ 194], 99.95th=[ 218], 00:39:12.608 | 99.99th=[ 247] 00:39:12.608 write: IOPS=3479, BW=13.6MiB/s (14.3MB/s)(256MiB/18833msec); 0 zone resets 00:39:12.608 slat (usec): min=3, max=353, avg= 5.27, stdev= 2.73 00:39:12.608 clat (usec): min=361, max=88792, avg=10851.53, stdev=18201.01 00:39:12.608 lat (usec): min=366, max=88797, avg=10856.81, stdev=18201.03 00:39:12.608 clat percentiles (usec): 00:39:12.608 | 1.00th=[ 676], 5.00th=[ 840], 10.00th=[ 1004], 20.00th=[ 1352], 00:39:12.608 | 30.00th=[ 2671], 40.00th=[ 4080], 50.00th=[ 4817], 60.00th=[ 5276], 00:39:12.608 | 70.00th=[ 6063], 80.00th=[10552], 90.00th=[32375], 95.00th=[64226], 00:39:12.608 | 99.00th=[72877], 99.50th=[77071], 99.90th=[81265], 99.95th=[82314], 00:39:12.608 | 99.99th=[86508] 00:39:12.608 bw ( KiB/s): min= 912, max=42536, per=78.46%, avg=21843.46, stdev=11992.51, samples=24 00:39:12.608 iops : min= 228, max=10634, avg=5460.83, stdev=2998.13, samples=24 00:39:12.608 lat (usec) : 500=0.02%, 750=1.40%, 1000=3.64% 00:39:12.608 lat (msec) : 2=7.67%, 4=7.29%, 10=21.56%, 20=5.10%, 50=46.84% 00:39:12.608 lat (msec) : 100=5.57%, 250=0.91%, 500=0.01% 00:39:12.608 cpu : usr=99.28%, sys=0.10%, ctx=29, majf=0, minf=5617 00:39:12.608 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:39:12.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:12.608 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:12.608 issued rwts: total=65241,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:12.608 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:12.608 second_half: (groupid=0, jobs=1): err= 0: pid=72589: Tue Nov 5 16:04:30 2024 00:39:12.608 read: IOPS=2968, BW=11.6MiB/s (12.2MB/s)(254MiB/21943msec) 00:39:12.608 slat (nsec): min=2950, max=16897, avg=3676.29, stdev=552.78 00:39:12.608 clat (usec): min=632, max=258475, avg=33168.12, stdev=15165.20 00:39:12.608 lat (usec): min=636, max=258479, avg=33171.79, stdev=15165.23 00:39:12.608 clat percentiles (msec): 00:39:12.608 | 1.00th=[ 4], 5.00th=[ 27], 10.00th=[ 29], 20.00th=[ 30], 00:39:12.608 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 31], 60.00th=[ 31], 00:39:12.608 | 70.00th=[ 31], 80.00th=[ 34], 90.00th=[ 37], 
95.00th=[ 47], 00:39:12.608 | 99.00th=[ 113], 99.50th=[ 138], 99.90th=[ 155], 99.95th=[ 163], 00:39:12.608 | 99.99th=[ 253] 00:39:12.608 write: IOPS=4624, BW=18.1MiB/s (18.9MB/s)(256MiB/14172msec); 0 zone resets 00:39:12.608 slat (usec): min=3, max=127, avg= 5.19, stdev= 2.09 00:39:12.608 clat (usec): min=381, max=89003, avg=9873.60, stdev=17740.81 00:39:12.608 lat (usec): min=387, max=89007, avg=9878.79, stdev=17740.81 00:39:12.608 clat percentiles (usec): 00:39:12.608 | 1.00th=[ 676], 5.00th=[ 807], 10.00th=[ 963], 20.00th=[ 1172], 00:39:12.608 | 30.00th=[ 1582], 40.00th=[ 2966], 50.00th=[ 4047], 60.00th=[ 4948], 00:39:12.608 | 70.00th=[ 6063], 80.00th=[10028], 90.00th=[13960], 95.00th=[63701], 00:39:12.608 | 99.00th=[72877], 99.50th=[74974], 99.90th=[81265], 99.95th=[82314], 00:39:12.608 | 99.99th=[87557] 00:39:12.609 bw ( KiB/s): min= 416, max=43576, per=100.00%, avg=29127.11, stdev=12758.98, samples=18 00:39:12.609 iops : min= 104, max=10894, avg=7281.78, stdev=3189.75, samples=18 00:39:12.609 lat (usec) : 500=0.01%, 750=1.63%, 1000=4.28% 00:39:12.609 lat (msec) : 2=10.87%, 4=8.60%, 10=15.35%, 20=5.88%, 50=46.71% 00:39:12.609 lat (msec) : 100=5.93%, 250=0.74%, 500=0.01% 00:39:12.609 cpu : usr=99.47%, sys=0.08%, ctx=41, majf=0, minf=5504 00:39:12.609 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:39:12.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:12.609 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:12.609 issued rwts: total=65143,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:12.609 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:12.609 00:39:12.609 Run status group 0 (all jobs): 00:39:12.609 READ: bw=23.0MiB/s (24.2MB/s), 11.5MiB/s-11.6MiB/s (12.1MB/s-12.2MB/s), io=509MiB (534MB), run=21943-22096msec 00:39:12.609 WRITE: bw=27.2MiB/s (28.5MB/s), 13.6MiB/s-18.1MiB/s (14.3MB/s-18.9MB/s), io=512MiB (537MB), run=14172-18833msec 00:39:12.609 ----------------------------------------------------- 00:39:12.609 Suppressions used: 00:39:12.609 count bytes template 00:39:12.609 2 10 /usr/src/fio/parse.c 00:39:12.609 2 192 /usr/src/fio/iolog.c 00:39:12.609 1 8 libtcmalloc_minimal.so 00:39:12.609 1 904 libcrypto.so 00:39:12.609 ----------------------------------------------------- 00:39:12.609 00:39:12.609 16:04:32 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:39:12.609 16:04:32 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:12.609 16:04:32 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:39:12.609 16:04:32 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:39:12.609 16:04:32 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:39:12.609 16:04:32 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:12.609 16:04:32 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:39:12.609 16:04:32 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:39:12.609 16:04:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:39:12.609 16:04:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:39:12.609 16:04:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 
00:39:12.609 16:04:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local sanitizers 00:39:12.609 16:04:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:39:12.609 16:04:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # shift 00:39:12.609 16:04:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # local asan_lib= 00:39:12.609 16:04:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:39:12.609 16:04:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:39:12.609 16:04:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # grep libasan 00:39:12.609 16:04:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:39:12.609 16:04:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:39:12.609 16:04:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:39:12.609 16:04:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # break 00:39:12.609 16:04:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:39:12.609 16:04:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:39:12.609 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:39:12.609 fio-3.35 00:39:12.609 Starting 1 thread 00:39:24.810 00:39:24.810 test: (groupid=0, jobs=1): err= 0: pid=72885: Tue Nov 5 16:04:45 2024 00:39:24.810 read: IOPS=8299, BW=32.4MiB/s (34.0MB/s)(255MiB/7856msec) 00:39:24.811 slat (nsec): min=2975, max=16460, avg=3511.37, stdev=656.64 00:39:24.811 clat (usec): min=468, max=29998, avg=15414.18, stdev=1688.35 00:39:24.811 lat (usec): min=472, max=30001, avg=15417.69, stdev=1688.38 00:39:24.811 clat percentiles (usec): 00:39:24.811 | 1.00th=[14222], 5.00th=[14484], 10.00th=[14484], 20.00th=[14615], 00:39:24.811 | 30.00th=[14746], 40.00th=[14877], 50.00th=[15008], 60.00th=[15139], 00:39:24.811 | 70.00th=[15270], 80.00th=[15401], 90.00th=[16057], 95.00th=[18744], 00:39:24.811 | 99.00th=[23725], 99.50th=[24249], 99.90th=[25560], 99.95th=[26346], 00:39:24.811 | 99.99th=[29230] 00:39:24.811 write: IOPS=16.2k, BW=63.3MiB/s (66.4MB/s)(256MiB/4045msec); 0 zone resets 00:39:24.811 slat (usec): min=3, max=103, avg= 5.26, stdev= 2.03 00:39:24.811 clat (usec): min=495, max=44242, avg=7859.89, stdev=9679.66 00:39:24.811 lat (usec): min=501, max=44247, avg=7865.15, stdev=9679.66 00:39:24.811 clat percentiles (usec): 00:39:24.811 | 1.00th=[ 627], 5.00th=[ 725], 10.00th=[ 824], 20.00th=[ 963], 00:39:24.811 | 30.00th=[ 1090], 40.00th=[ 1467], 50.00th=[ 5407], 60.00th=[ 6194], 00:39:24.811 | 70.00th=[ 7177], 80.00th=[ 8586], 90.00th=[28181], 95.00th=[29754], 00:39:24.811 | 99.00th=[33424], 99.50th=[35914], 99.90th=[38011], 99.95th=[38536], 00:39:24.811 | 99.99th=[42730] 00:39:24.811 bw ( KiB/s): min= 4096, max=86032, per=89.89%, avg=58254.22, stdev=22809.10, samples=9 00:39:24.811 iops : min= 1024, max=21508, avg=14563.56, stdev=5702.27, samples=9 00:39:24.811 lat (usec) : 500=0.01%, 750=3.19%, 1000=8.34% 00:39:24.811 lat (msec) : 2=9.08%, 4=0.60%, 10=20.61%, 20=48.36%, 50=9.83% 00:39:24.811 cpu : usr=99.17%, sys=0.13%, ctx=17, majf=0, minf=5565 00:39:24.811 IO 
depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:39:24.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:24.811 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:24.811 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:24.811 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:24.811 00:39:24.811 Run status group 0 (all jobs): 00:39:24.811 READ: bw=32.4MiB/s (34.0MB/s), 32.4MiB/s-32.4MiB/s (34.0MB/s-34.0MB/s), io=255MiB (267MB), run=7856-7856msec 00:39:24.811 WRITE: bw=63.3MiB/s (66.4MB/s), 63.3MiB/s-63.3MiB/s (66.4MB/s-66.4MB/s), io=256MiB (268MB), run=4045-4045msec 00:39:26.720 ----------------------------------------------------- 00:39:26.720 Suppressions used: 00:39:26.720 count bytes template 00:39:26.720 1 5 /usr/src/fio/parse.c 00:39:26.720 2 192 /usr/src/fio/iolog.c 00:39:26.720 1 8 libtcmalloc_minimal.so 00:39:26.720 1 904 libcrypto.so 00:39:26.720 ----------------------------------------------------- 00:39:26.720 00:39:26.720 16:04:47 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:39:26.720 16:04:47 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:26.720 16:04:47 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:39:26.720 16:04:47 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:39:26.720 Remove shared memory files 00:39:26.720 16:04:47 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:39:26.720 16:04:47 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:39:26.720 16:04:47 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:39:26.720 16:04:47 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:39:26.720 16:04:47 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57119 /dev/shm/spdk_tgt_trace.pid71230 00:39:26.720 16:04:47 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:39:26.720 16:04:47 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:39:26.720 ************************************ 00:39:26.720 END TEST ftl_fio_basic 00:39:26.720 ************************************ 00:39:26.720 00:39:26.720 real 1m0.106s 00:39:26.720 user 2m2.733s 00:39:26.720 sys 0m11.194s 00:39:26.720 16:04:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:39:26.720 16:04:47 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:39:26.720 16:04:47 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:39:26.720 16:04:47 ftl -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:39:26.720 16:04:47 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:39:26.720 16:04:47 ftl -- common/autotest_common.sh@10 -- # set +x 00:39:26.720 ************************************ 00:39:26.720 START TEST ftl_bdevperf 00:39:26.720 ************************************ 00:39:26.720 16:04:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:39:26.720 * Looking for test storage... 
00:39:26.720 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:39:26.720 16:04:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:39:26.720 16:04:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:39:26.720 16:04:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:39:26.720 16:04:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:39:26.720 16:04:47 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:26.720 16:04:47 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:26.720 16:04:47 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:26.720 16:04:47 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:39:26.720 16:04:47 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:39:26.720 16:04:47 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:39:26.720 16:04:47 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:39:26.720 16:04:47 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:39:26.720 16:04:47 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:39:26.720 16:04:47 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:39:26.720 16:04:47 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:26.720 16:04:47 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:39:26.720 16:04:47 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:39:26.720 16:04:47 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:26.720 16:04:47 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:26.720 16:04:47 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:39:26.720 16:04:47 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:39:26.720 16:04:47 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:26.720 16:04:47 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:39:26.720 16:04:47 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:39:26.720 16:04:47 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:39:26.720 16:04:47 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:39:26.720 16:04:47 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:26.720 16:04:47 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:39:26.720 16:04:47 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:39:26.720 16:04:47 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:26.720 16:04:47 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:26.720 16:04:47 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:39:26.720 16:04:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:26.720 16:04:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:39:26.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:26.720 --rc genhtml_branch_coverage=1 00:39:26.720 --rc genhtml_function_coverage=1 00:39:26.720 --rc genhtml_legend=1 00:39:26.720 --rc geninfo_all_blocks=1 00:39:26.720 --rc geninfo_unexecuted_blocks=1 00:39:26.720 00:39:26.720 ' 00:39:26.720 16:04:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:39:26.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:26.720 --rc genhtml_branch_coverage=1 00:39:26.720 
--rc genhtml_function_coverage=1 00:39:26.720 --rc genhtml_legend=1 00:39:26.720 --rc geninfo_all_blocks=1 00:39:26.720 --rc geninfo_unexecuted_blocks=1 00:39:26.720 00:39:26.720 ' 00:39:26.720 16:04:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:39:26.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:26.720 --rc genhtml_branch_coverage=1 00:39:26.720 --rc genhtml_function_coverage=1 00:39:26.720 --rc genhtml_legend=1 00:39:26.720 --rc geninfo_all_blocks=1 00:39:26.720 --rc geninfo_unexecuted_blocks=1 00:39:26.720 00:39:26.720 ' 00:39:26.720 16:04:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:39:26.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:26.720 --rc genhtml_branch_coverage=1 00:39:26.720 --rc genhtml_function_coverage=1 00:39:26.720 --rc genhtml_legend=1 00:39:26.720 --rc geninfo_all_blocks=1 00:39:26.720 --rc geninfo_unexecuted_blocks=1 00:39:26.720 00:39:26.720 ' 00:39:26.720 16:04:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:39:26.720 16:04:47 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:39:26.720 16:04:47 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:39:26.720 16:04:47 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:39:26.720 16:04:47 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:39:26.720 16:04:47 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:39:26.720 16:04:47 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:26.720 16:04:47 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:39:26.720 16:04:47 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:39:26.720 16:04:47 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:39:26.720 16:04:47 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:39:26.720 16:04:47 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:39:26.721 16:04:47 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:39:26.721 16:04:47 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:39:26.721 16:04:47 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:39:26.721 16:04:47 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:39:26.721 16:04:47 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:39:26.721 16:04:47 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:39:26.721 16:04:47 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:39:26.721 16:04:47 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:39:26.721 16:04:47 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:39:26.721 16:04:47 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:39:26.721 16:04:47 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:39:26.721 16:04:47 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:39:26.721 16:04:47 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:39:26.721 16:04:47 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:39:26.721 16:04:47 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:39:26.721 16:04:47 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:26.721 16:04:47 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:26.721 16:04:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:39:26.721 16:04:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:39:26.721 16:04:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:39:26.721 16:04:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:26.721 16:04:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:39:26.721 16:04:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=73116 00:39:26.721 16:04:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:39:26.721 16:04:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 73116 00:39:26.721 16:04:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:39:26.721 16:04:47 ftl.ftl_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 73116 ']' 00:39:26.721 16:04:47 ftl.ftl_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:26.721 16:04:47 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:39:26.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:26.721 16:04:47 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:26.721 16:04:47 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:39:26.721 16:04:47 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:26.721 [2024-11-05 16:04:48.031274] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 
00:39:26.721 [2024-11-05 16:04:48.031540] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73116 ] 00:39:26.981 [2024-11-05 16:04:48.192591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:26.981 [2024-11-05 16:04:48.292445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:27.553 16:04:48 ftl.ftl_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:39:27.553 16:04:48 ftl.ftl_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:39:27.553 16:04:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:39:27.553 16:04:48 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:39:27.553 16:04:48 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:39:27.554 16:04:48 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:39:27.554 16:04:48 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:39:27.554 16:04:48 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:39:28.126 16:04:49 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:39:28.126 16:04:49 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:39:28.126 16:04:49 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:39:28.126 16:04:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:39:28.126 16:04:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:39:28.126 16:04:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bs 00:39:28.126 16:04:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:39:28.126 16:04:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:39:28.126 16:04:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:39:28.126 { 00:39:28.126 "name": "nvme0n1", 00:39:28.126 "aliases": [ 00:39:28.126 "675915bb-90cb-4f11-8b95-e877b223ba35" 00:39:28.126 ], 00:39:28.126 "product_name": "NVMe disk", 00:39:28.126 "block_size": 4096, 00:39:28.126 "num_blocks": 1310720, 00:39:28.126 "uuid": "675915bb-90cb-4f11-8b95-e877b223ba35", 00:39:28.126 "numa_id": -1, 00:39:28.126 "assigned_rate_limits": { 00:39:28.126 "rw_ios_per_sec": 0, 00:39:28.126 "rw_mbytes_per_sec": 0, 00:39:28.126 "r_mbytes_per_sec": 0, 00:39:28.126 "w_mbytes_per_sec": 0 00:39:28.126 }, 00:39:28.126 "claimed": true, 00:39:28.126 "claim_type": "read_many_write_one", 00:39:28.126 "zoned": false, 00:39:28.126 "supported_io_types": { 00:39:28.126 "read": true, 00:39:28.126 "write": true, 00:39:28.126 "unmap": true, 00:39:28.126 "flush": true, 00:39:28.126 "reset": true, 00:39:28.126 "nvme_admin": true, 00:39:28.126 "nvme_io": true, 00:39:28.126 "nvme_io_md": false, 00:39:28.126 "write_zeroes": true, 00:39:28.126 "zcopy": false, 00:39:28.126 "get_zone_info": false, 00:39:28.126 "zone_management": false, 00:39:28.126 "zone_append": false, 00:39:28.126 "compare": true, 00:39:28.126 "compare_and_write": false, 00:39:28.126 "abort": true, 00:39:28.126 "seek_hole": false, 00:39:28.126 "seek_data": false, 00:39:28.126 "copy": true, 00:39:28.126 "nvme_iov_md": false 00:39:28.126 }, 00:39:28.126 "driver_specific": { 00:39:28.126 
"nvme": [ 00:39:28.126 { 00:39:28.126 "pci_address": "0000:00:11.0", 00:39:28.126 "trid": { 00:39:28.126 "trtype": "PCIe", 00:39:28.126 "traddr": "0000:00:11.0" 00:39:28.126 }, 00:39:28.126 "ctrlr_data": { 00:39:28.126 "cntlid": 0, 00:39:28.126 "vendor_id": "0x1b36", 00:39:28.126 "model_number": "QEMU NVMe Ctrl", 00:39:28.126 "serial_number": "12341", 00:39:28.126 "firmware_revision": "8.0.0", 00:39:28.126 "subnqn": "nqn.2019-08.org.qemu:12341", 00:39:28.126 "oacs": { 00:39:28.126 "security": 0, 00:39:28.126 "format": 1, 00:39:28.126 "firmware": 0, 00:39:28.126 "ns_manage": 1 00:39:28.126 }, 00:39:28.126 "multi_ctrlr": false, 00:39:28.126 "ana_reporting": false 00:39:28.126 }, 00:39:28.126 "vs": { 00:39:28.126 "nvme_version": "1.4" 00:39:28.126 }, 00:39:28.126 "ns_data": { 00:39:28.126 "id": 1, 00:39:28.126 "can_share": false 00:39:28.126 } 00:39:28.126 } 00:39:28.126 ], 00:39:28.126 "mp_policy": "active_passive" 00:39:28.126 } 00:39:28.126 } 00:39:28.126 ]' 00:39:28.126 16:04:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:39:28.126 16:04:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:39:28.126 16:04:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:39:28.126 16:04:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=1310720 00:39:28.126 16:04:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:39:28.126 16:04:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 5120 00:39:28.126 16:04:49 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:39:28.126 16:04:49 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:39:28.126 16:04:49 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:39:28.126 16:04:49 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:39:28.126 16:04:49 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:39:28.402 16:04:49 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=63970a8a-3873-4de6-8d81-0d60a73a9391 00:39:28.402 16:04:49 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:39:28.402 16:04:49 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 63970a8a-3873-4de6-8d81-0d60a73a9391 00:39:28.662 16:04:49 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:39:28.923 16:04:50 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=b1b7edb3-4d64-403d-bdd0-3bf42be2b524 00:39:28.923 16:04:50 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u b1b7edb3-4d64-403d-bdd0-3bf42be2b524 00:39:29.184 16:04:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=376d9fb5-37ba-413f-8037-5b834b76fa46 00:39:29.184 16:04:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 376d9fb5-37ba-413f-8037-5b834b76fa46 00:39:29.184 16:04:50 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:39:29.184 16:04:50 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:39:29.184 16:04:50 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=376d9fb5-37ba-413f-8037-5b834b76fa46 00:39:29.184 16:04:50 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:39:29.184 16:04:50 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 376d9fb5-37ba-413f-8037-5b834b76fa46 00:39:29.184 16:04:50 
ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bdev_name=376d9fb5-37ba-413f-8037-5b834b76fa46 00:39:29.184 16:04:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:39:29.184 16:04:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bs 00:39:29.184 16:04:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:39:29.184 16:04:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 376d9fb5-37ba-413f-8037-5b834b76fa46 00:39:29.443 16:04:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:39:29.443 { 00:39:29.443 "name": "376d9fb5-37ba-413f-8037-5b834b76fa46", 00:39:29.443 "aliases": [ 00:39:29.443 "lvs/nvme0n1p0" 00:39:29.443 ], 00:39:29.443 "product_name": "Logical Volume", 00:39:29.443 "block_size": 4096, 00:39:29.443 "num_blocks": 26476544, 00:39:29.443 "uuid": "376d9fb5-37ba-413f-8037-5b834b76fa46", 00:39:29.443 "assigned_rate_limits": { 00:39:29.443 "rw_ios_per_sec": 0, 00:39:29.443 "rw_mbytes_per_sec": 0, 00:39:29.443 "r_mbytes_per_sec": 0, 00:39:29.443 "w_mbytes_per_sec": 0 00:39:29.443 }, 00:39:29.443 "claimed": false, 00:39:29.443 "zoned": false, 00:39:29.443 "supported_io_types": { 00:39:29.443 "read": true, 00:39:29.443 "write": true, 00:39:29.443 "unmap": true, 00:39:29.443 "flush": false, 00:39:29.443 "reset": true, 00:39:29.443 "nvme_admin": false, 00:39:29.443 "nvme_io": false, 00:39:29.443 "nvme_io_md": false, 00:39:29.443 "write_zeroes": true, 00:39:29.443 "zcopy": false, 00:39:29.443 "get_zone_info": false, 00:39:29.443 "zone_management": false, 00:39:29.443 "zone_append": false, 00:39:29.443 "compare": false, 00:39:29.443 "compare_and_write": false, 00:39:29.443 "abort": false, 00:39:29.443 "seek_hole": true, 00:39:29.443 "seek_data": true, 00:39:29.443 "copy": false, 00:39:29.443 "nvme_iov_md": false 00:39:29.443 }, 00:39:29.443 "driver_specific": { 00:39:29.443 "lvol": { 00:39:29.443 "lvol_store_uuid": "b1b7edb3-4d64-403d-bdd0-3bf42be2b524", 00:39:29.443 "base_bdev": "nvme0n1", 00:39:29.443 "thin_provision": true, 00:39:29.443 "num_allocated_clusters": 0, 00:39:29.443 "snapshot": false, 00:39:29.443 "clone": false, 00:39:29.443 "esnap_clone": false 00:39:29.443 } 00:39:29.443 } 00:39:29.443 } 00:39:29.443 ]' 00:39:29.443 16:04:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:39:29.443 16:04:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:39:29.443 16:04:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:39:29.443 16:04:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=26476544 00:39:29.443 16:04:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:39:29.443 16:04:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 103424 00:39:29.443 16:04:50 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:39:29.443 16:04:50 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:39:29.443 16:04:50 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:39:29.702 16:04:50 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:39:29.702 16:04:50 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:39:29.702 16:04:50 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 376d9fb5-37ba-413f-8037-5b834b76fa46 00:39:29.702 16:04:50 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1380 -- # local bdev_name=376d9fb5-37ba-413f-8037-5b834b76fa46 00:39:29.702 16:04:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:39:29.702 16:04:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bs 00:39:29.702 16:04:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:39:29.702 16:04:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 376d9fb5-37ba-413f-8037-5b834b76fa46 00:39:29.960 16:04:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:39:29.960 { 00:39:29.960 "name": "376d9fb5-37ba-413f-8037-5b834b76fa46", 00:39:29.960 "aliases": [ 00:39:29.960 "lvs/nvme0n1p0" 00:39:29.960 ], 00:39:29.960 "product_name": "Logical Volume", 00:39:29.960 "block_size": 4096, 00:39:29.960 "num_blocks": 26476544, 00:39:29.960 "uuid": "376d9fb5-37ba-413f-8037-5b834b76fa46", 00:39:29.960 "assigned_rate_limits": { 00:39:29.960 "rw_ios_per_sec": 0, 00:39:29.960 "rw_mbytes_per_sec": 0, 00:39:29.960 "r_mbytes_per_sec": 0, 00:39:29.960 "w_mbytes_per_sec": 0 00:39:29.960 }, 00:39:29.960 "claimed": false, 00:39:29.960 "zoned": false, 00:39:29.960 "supported_io_types": { 00:39:29.960 "read": true, 00:39:29.960 "write": true, 00:39:29.960 "unmap": true, 00:39:29.960 "flush": false, 00:39:29.960 "reset": true, 00:39:29.960 "nvme_admin": false, 00:39:29.960 "nvme_io": false, 00:39:29.960 "nvme_io_md": false, 00:39:29.960 "write_zeroes": true, 00:39:29.960 "zcopy": false, 00:39:29.960 "get_zone_info": false, 00:39:29.960 "zone_management": false, 00:39:29.960 "zone_append": false, 00:39:29.960 "compare": false, 00:39:29.960 "compare_and_write": false, 00:39:29.960 "abort": false, 00:39:29.960 "seek_hole": true, 00:39:29.960 "seek_data": true, 00:39:29.960 "copy": false, 00:39:29.960 "nvme_iov_md": false 00:39:29.960 }, 00:39:29.960 "driver_specific": { 00:39:29.960 "lvol": { 00:39:29.960 "lvol_store_uuid": "b1b7edb3-4d64-403d-bdd0-3bf42be2b524", 00:39:29.960 "base_bdev": "nvme0n1", 00:39:29.960 "thin_provision": true, 00:39:29.960 "num_allocated_clusters": 0, 00:39:29.960 "snapshot": false, 00:39:29.960 "clone": false, 00:39:29.960 "esnap_clone": false 00:39:29.960 } 00:39:29.960 } 00:39:29.960 } 00:39:29.960 ]' 00:39:29.960 16:04:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:39:29.960 16:04:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:39:29.960 16:04:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:39:29.960 16:04:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=26476544 00:39:29.960 16:04:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:39:29.960 16:04:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 103424 00:39:29.960 16:04:51 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:39:29.960 16:04:51 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:39:30.218 16:04:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:39:30.218 16:04:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 376d9fb5-37ba-413f-8037-5b834b76fa46 00:39:30.218 16:04:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bdev_name=376d9fb5-37ba-413f-8037-5b834b76fa46 00:39:30.218 16:04:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:39:30.218 16:04:51 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bs 00:39:30.218 16:04:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:39:30.218 16:04:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 376d9fb5-37ba-413f-8037-5b834b76fa46 00:39:30.477 16:04:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:39:30.477 { 00:39:30.477 "name": "376d9fb5-37ba-413f-8037-5b834b76fa46", 00:39:30.477 "aliases": [ 00:39:30.477 "lvs/nvme0n1p0" 00:39:30.477 ], 00:39:30.477 "product_name": "Logical Volume", 00:39:30.477 "block_size": 4096, 00:39:30.477 "num_blocks": 26476544, 00:39:30.477 "uuid": "376d9fb5-37ba-413f-8037-5b834b76fa46", 00:39:30.477 "assigned_rate_limits": { 00:39:30.477 "rw_ios_per_sec": 0, 00:39:30.477 "rw_mbytes_per_sec": 0, 00:39:30.477 "r_mbytes_per_sec": 0, 00:39:30.477 "w_mbytes_per_sec": 0 00:39:30.477 }, 00:39:30.477 "claimed": false, 00:39:30.477 "zoned": false, 00:39:30.477 "supported_io_types": { 00:39:30.477 "read": true, 00:39:30.477 "write": true, 00:39:30.477 "unmap": true, 00:39:30.477 "flush": false, 00:39:30.477 "reset": true, 00:39:30.477 "nvme_admin": false, 00:39:30.477 "nvme_io": false, 00:39:30.477 "nvme_io_md": false, 00:39:30.477 "write_zeroes": true, 00:39:30.477 "zcopy": false, 00:39:30.477 "get_zone_info": false, 00:39:30.477 "zone_management": false, 00:39:30.477 "zone_append": false, 00:39:30.477 "compare": false, 00:39:30.477 "compare_and_write": false, 00:39:30.477 "abort": false, 00:39:30.477 "seek_hole": true, 00:39:30.477 "seek_data": true, 00:39:30.477 "copy": false, 00:39:30.477 "nvme_iov_md": false 00:39:30.477 }, 00:39:30.477 "driver_specific": { 00:39:30.477 "lvol": { 00:39:30.477 "lvol_store_uuid": "b1b7edb3-4d64-403d-bdd0-3bf42be2b524", 00:39:30.477 "base_bdev": "nvme0n1", 00:39:30.477 "thin_provision": true, 00:39:30.477 "num_allocated_clusters": 0, 00:39:30.477 "snapshot": false, 00:39:30.477 "clone": false, 00:39:30.477 "esnap_clone": false 00:39:30.477 } 00:39:30.477 } 00:39:30.477 } 00:39:30.477 ]' 00:39:30.477 16:04:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:39:30.477 16:04:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:39:30.477 16:04:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:39:30.477 16:04:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=26476544 00:39:30.477 16:04:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:39:30.477 16:04:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 103424 00:39:30.477 16:04:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:39:30.477 16:04:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 376d9fb5-37ba-413f-8037-5b834b76fa46 -c nvc0n1p0 --l2p_dram_limit 20 00:39:30.739 [2024-11-05 16:04:51.851842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:30.739 [2024-11-05 16:04:51.851882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:39:30.739 [2024-11-05 16:04:51.851893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:39:30.739 [2024-11-05 16:04:51.851901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:30.739 [2024-11-05 16:04:51.851942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:30.739 [2024-11-05 16:04:51.851952] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:39:30.739 [2024-11-05 16:04:51.851959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:39:30.739 [2024-11-05 16:04:51.851966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:30.739 [2024-11-05 16:04:51.851979] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:39:30.739 [2024-11-05 16:04:51.852536] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:39:30.739 [2024-11-05 16:04:51.852557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:30.739 [2024-11-05 16:04:51.852565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:39:30.739 [2024-11-05 16:04:51.852572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.582 ms 00:39:30.739 [2024-11-05 16:04:51.852579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:30.739 [2024-11-05 16:04:51.852670] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 05f601f3-1da6-49e6-b055-d7b08b3365e5 00:39:30.739 [2024-11-05 16:04:51.853603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:30.739 [2024-11-05 16:04:51.853633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:39:30.739 [2024-11-05 16:04:51.853642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:39:30.739 [2024-11-05 16:04:51.853651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:30.739 [2024-11-05 16:04:51.858379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:30.739 [2024-11-05 16:04:51.858403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:39:30.739 [2024-11-05 16:04:51.858412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.691 ms 00:39:30.739 [2024-11-05 16:04:51.858418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:30.739 [2024-11-05 16:04:51.858487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:30.739 [2024-11-05 16:04:51.858494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:39:30.739 [2024-11-05 16:04:51.858504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:39:30.739 [2024-11-05 16:04:51.858510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:30.739 [2024-11-05 16:04:51.858541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:30.739 [2024-11-05 16:04:51.858548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:39:30.739 [2024-11-05 16:04:51.858556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:39:30.739 [2024-11-05 16:04:51.858562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:30.739 [2024-11-05 16:04:51.858578] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:39:30.739 [2024-11-05 16:04:51.861408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:30.739 [2024-11-05 16:04:51.861432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:39:30.739 [2024-11-05 16:04:51.861439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.836 ms 00:39:30.739 [2024-11-05 16:04:51.861448] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:30.739 [2024-11-05 16:04:51.861472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:30.739 [2024-11-05 16:04:51.861480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:39:30.739 [2024-11-05 16:04:51.861486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:39:30.739 [2024-11-05 16:04:51.861494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:30.739 [2024-11-05 16:04:51.861511] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:39:30.739 [2024-11-05 16:04:51.861618] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:39:30.739 [2024-11-05 16:04:51.861627] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:39:30.739 [2024-11-05 16:04:51.861637] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:39:30.739 [2024-11-05 16:04:51.861644] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:39:30.739 [2024-11-05 16:04:51.861653] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:39:30.739 [2024-11-05 16:04:51.861659] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:39:30.739 [2024-11-05 16:04:51.861666] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:39:30.739 [2024-11-05 16:04:51.861672] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:39:30.739 [2024-11-05 16:04:51.861679] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:39:30.739 [2024-11-05 16:04:51.861684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:30.739 [2024-11-05 16:04:51.861693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:39:30.739 [2024-11-05 16:04:51.861699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.175 ms 00:39:30.739 [2024-11-05 16:04:51.861705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:30.739 [2024-11-05 16:04:51.861774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:30.739 [2024-11-05 16:04:51.861784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:39:30.739 [2024-11-05 16:04:51.861790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:39:30.739 [2024-11-05 16:04:51.861798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:30.739 [2024-11-05 16:04:51.861865] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:39:30.739 [2024-11-05 16:04:51.861874] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:39:30.739 [2024-11-05 16:04:51.861882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:39:30.739 [2024-11-05 16:04:51.861889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:30.739 [2024-11-05 16:04:51.861894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:39:30.739 [2024-11-05 16:04:51.861901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:39:30.739 [2024-11-05 16:04:51.861906] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:39:30.739 
[2024-11-05 16:04:51.861913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:39:30.739 [2024-11-05 16:04:51.861918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:39:30.739 [2024-11-05 16:04:51.861924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:39:30.739 [2024-11-05 16:04:51.861929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:39:30.739 [2024-11-05 16:04:51.861935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:39:30.739 [2024-11-05 16:04:51.861940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:39:30.739 [2024-11-05 16:04:51.861952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:39:30.739 [2024-11-05 16:04:51.861957] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:39:30.739 [2024-11-05 16:04:51.861965] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:30.739 [2024-11-05 16:04:51.861970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:39:30.739 [2024-11-05 16:04:51.861976] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:39:30.739 [2024-11-05 16:04:51.861981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:30.739 [2024-11-05 16:04:51.861989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:39:30.740 [2024-11-05 16:04:51.861994] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:39:30.740 [2024-11-05 16:04:51.862000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:30.740 [2024-11-05 16:04:51.862005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:39:30.740 [2024-11-05 16:04:51.862012] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:39:30.740 [2024-11-05 16:04:51.862016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:30.740 [2024-11-05 16:04:51.862022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:39:30.740 [2024-11-05 16:04:51.862027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:39:30.740 [2024-11-05 16:04:51.862033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:30.740 [2024-11-05 16:04:51.862038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:39:30.740 [2024-11-05 16:04:51.862045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:39:30.740 [2024-11-05 16:04:51.862050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:30.740 [2024-11-05 16:04:51.862058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:39:30.740 [2024-11-05 16:04:51.862063] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:39:30.740 [2024-11-05 16:04:51.862069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:39:30.740 [2024-11-05 16:04:51.862074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:39:30.740 [2024-11-05 16:04:51.862082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:39:30.740 [2024-11-05 16:04:51.862087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:39:30.740 [2024-11-05 16:04:51.862094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:39:30.740 [2024-11-05 16:04:51.862099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:39:30.740 [2024-11-05 16:04:51.862105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:30.740 [2024-11-05 16:04:51.862110] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:39:30.740 [2024-11-05 16:04:51.862117] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:39:30.740 [2024-11-05 16:04:51.862122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:30.740 [2024-11-05 16:04:51.862128] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:39:30.740 [2024-11-05 16:04:51.862133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:39:30.740 [2024-11-05 16:04:51.862140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:39:30.740 [2024-11-05 16:04:51.862146] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:30.740 [2024-11-05 16:04:51.862155] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:39:30.740 [2024-11-05 16:04:51.862160] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:39:30.740 [2024-11-05 16:04:51.862166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:39:30.740 [2024-11-05 16:04:51.862171] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:39:30.740 [2024-11-05 16:04:51.862178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:39:30.740 [2024-11-05 16:04:51.862183] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:39:30.740 [2024-11-05 16:04:51.862191] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:39:30.740 [2024-11-05 16:04:51.862198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:39:30.740 [2024-11-05 16:04:51.862205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:39:30.740 [2024-11-05 16:04:51.862211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:39:30.740 [2024-11-05 16:04:51.862217] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:39:30.740 [2024-11-05 16:04:51.862222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:39:30.740 [2024-11-05 16:04:51.862229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:39:30.740 [2024-11-05 16:04:51.862234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:39:30.740 [2024-11-05 16:04:51.862242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:39:30.740 [2024-11-05 16:04:51.862248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:39:30.740 [2024-11-05 16:04:51.862256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:39:30.740 [2024-11-05 16:04:51.862261] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:39:30.740 [2024-11-05 16:04:51.862268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:39:30.740 [2024-11-05 16:04:51.862273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:39:30.740 [2024-11-05 16:04:51.862280] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:39:30.740 [2024-11-05 16:04:51.862286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:39:30.740 [2024-11-05 16:04:51.862293] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:39:30.740 [2024-11-05 16:04:51.862308] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:39:30.740 [2024-11-05 16:04:51.862317] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:39:30.740 [2024-11-05 16:04:51.862323] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:39:30.740 [2024-11-05 16:04:51.862330] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:39:30.740 [2024-11-05 16:04:51.862335] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:39:30.740 [2024-11-05 16:04:51.862342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:30.740 [2024-11-05 16:04:51.862349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:39:30.740 [2024-11-05 16:04:51.862356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.526 ms 00:39:30.740 [2024-11-05 16:04:51.862362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:30.740 [2024-11-05 16:04:51.862388] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
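The startup trace above is the product of a short chain of RPCs issued earlier in this run. A minimal sketch that replays it, using the bdev names, sizes, and PCI address exactly as they appear in the trace (they will differ on another machine); the size arithmetic is the same as the get_bdev_size helper: 26476544 blocks x 4096 B / 2^20 = 103424 MiB.

#!/usr/bin/env bash
# Replay of the FTL bring-up seen in this run.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
BASE=376d9fb5-37ba-413f-8037-5b834b76fa46      # thin-provisioned lvol (alias lvs/nvme0n1p0)

# get_bdev_size: block_size * num_blocks, reported in MiB
bs=$($RPC bdev_get_bdevs -b "$BASE" | jq '.[] .block_size')   # 4096
nb=$($RPC bdev_get_bdevs -b "$BASE" | jq '.[] .num_blocks')   # 26476544
echo $(( bs * nb / 1024 / 1024 ))                             # 103424 (MiB)

# NV cache: attach the NVMe controller used for the cache, carve one 5171 MiB split
$RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
$RPC bdev_split_create nvc0n1 -s 5171 1                       # -> nvc0n1p0

# FTL bdev: lvol as base device, split as NV cache, 20 MiB of DRAM for the L2P
$RPC -t 240 bdev_ftl_create -b ftl0 -d "$BASE" -c nvc0n1p0 --l2p_dram_limit 20

The layout dump is consistent with those numbers: 20971520 L2P entries at 4 B each fill the 80.00 MiB l2p region, and the 20 MiB --l2p_dram_limit is why a later line reports "l2p maximum resident size is: 19 (of 20) MiB".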
00:39:30.740 [2024-11-05 16:04:51.862395] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:39:33.282 [2024-11-05 16:04:54.435008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:33.282 [2024-11-05 16:04:54.435070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:39:33.282 [2024-11-05 16:04:54.435092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2572.604 ms 00:39:33.282 [2024-11-05 16:04:54.435101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:33.282 [2024-11-05 16:04:54.460469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:33.282 [2024-11-05 16:04:54.460637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:39:33.282 [2024-11-05 16:04:54.460659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.166 ms 00:39:33.282 [2024-11-05 16:04:54.460668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:33.282 [2024-11-05 16:04:54.460802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:33.282 [2024-11-05 16:04:54.460814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:39:33.282 [2024-11-05 16:04:54.460826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:39:33.282 [2024-11-05 16:04:54.460834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:33.282 [2024-11-05 16:04:54.501816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:33.282 [2024-11-05 16:04:54.501955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:39:33.282 [2024-11-05 16:04:54.502024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.935 ms 00:39:33.282 [2024-11-05 16:04:54.502048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:33.282 [2024-11-05 16:04:54.502094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:33.282 [2024-11-05 16:04:54.502119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:39:33.282 [2024-11-05 16:04:54.502140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:39:33.282 [2024-11-05 16:04:54.502159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:33.282 [2024-11-05 16:04:54.502540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:33.282 [2024-11-05 16:04:54.502579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:39:33.282 [2024-11-05 16:04:54.502655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:39:33.282 [2024-11-05 16:04:54.502677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:33.282 [2024-11-05 16:04:54.502814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:33.282 [2024-11-05 16:04:54.502869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:39:33.282 [2024-11-05 16:04:54.502896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:39:33.282 [2024-11-05 16:04:54.502915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:33.282 [2024-11-05 16:04:54.515812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:33.282 [2024-11-05 16:04:54.515918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:39:33.282 [2024-11-05 
16:04:54.515972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.841 ms 00:39:33.282 [2024-11-05 16:04:54.515995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:33.282 [2024-11-05 16:04:54.527294] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:39:33.282 [2024-11-05 16:04:54.532402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:33.282 [2024-11-05 16:04:54.532505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:39:33.282 [2024-11-05 16:04:54.532555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.332 ms 00:39:33.282 [2024-11-05 16:04:54.532579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:33.282 [2024-11-05 16:04:54.602054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:33.282 [2024-11-05 16:04:54.602212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:39:33.282 [2024-11-05 16:04:54.602273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.440 ms 00:39:33.282 [2024-11-05 16:04:54.602308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:33.282 [2024-11-05 16:04:54.602490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:33.282 [2024-11-05 16:04:54.602581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:39:33.282 [2024-11-05 16:04:54.602606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:39:33.282 [2024-11-05 16:04:54.602626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:33.282 [2024-11-05 16:04:54.626017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:33.282 [2024-11-05 16:04:54.626128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:39:33.282 [2024-11-05 16:04:54.626178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.348 ms 00:39:33.282 [2024-11-05 16:04:54.626201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:33.543 [2024-11-05 16:04:54.649535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:33.543 [2024-11-05 16:04:54.649638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:39:33.543 [2024-11-05 16:04:54.649698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.240 ms 00:39:33.543 [2024-11-05 16:04:54.649720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:33.543 [2024-11-05 16:04:54.650292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:33.543 [2024-11-05 16:04:54.650377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:39:33.543 [2024-11-05 16:04:54.650422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.524 ms 00:39:33.543 [2024-11-05 16:04:54.650445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:33.543 [2024-11-05 16:04:54.721972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:33.543 [2024-11-05 16:04:54.722103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:39:33.543 [2024-11-05 16:04:54.722153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.442 ms 00:39:33.543 [2024-11-05 16:04:54.722177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:33.543 [2024-11-05 
16:04:54.746934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:33.543 [2024-11-05 16:04:54.747050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:39:33.543 [2024-11-05 16:04:54.747099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.641 ms 00:39:33.543 [2024-11-05 16:04:54.747127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:33.543 [2024-11-05 16:04:54.770607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:33.543 [2024-11-05 16:04:54.770723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:39:33.543 [2024-11-05 16:04:54.770789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.426 ms 00:39:33.543 [2024-11-05 16:04:54.770814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:33.543 [2024-11-05 16:04:54.794676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:33.543 [2024-11-05 16:04:54.794805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:39:33.543 [2024-11-05 16:04:54.794854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.764 ms 00:39:33.543 [2024-11-05 16:04:54.794879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:33.543 [2024-11-05 16:04:54.794954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:33.543 [2024-11-05 16:04:54.794984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:39:33.543 [2024-11-05 16:04:54.795004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:39:33.543 [2024-11-05 16:04:54.795023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:33.543 [2024-11-05 16:04:54.795108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:33.543 [2024-11-05 16:04:54.795323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:39:33.543 [2024-11-05 16:04:54.795348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:39:33.543 [2024-11-05 16:04:54.795368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:33.543 [2024-11-05 16:04:54.796217] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2943.965 ms, result 0 00:39:33.543 { 00:39:33.543 "name": "ftl0", 00:39:33.543 "uuid": "05f601f3-1da6-49e6-b055-d7b08b3365e5" 00:39:33.543 } 00:39:33.543 16:04:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:39:33.543 16:04:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:39:33.543 16:04:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:39:33.806 16:04:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:39:33.806 [2024-11-05 16:04:55.124468] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:39:33.806 I/O size of 69632 is greater than zero copy threshold (65536). 00:39:33.806 Zero copy mechanism will not be used. 00:39:33.806 Running I/O for 4 seconds... 
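From here the test runs three timed workloads, each kicked off over RPC against a single long-running bdevperf process. A hedged sketch of that pattern -- the bdevperf launch line and the ftl.json config that recreates ftl0 are assumptions, while the perform_tests call is verbatim from this run:

# bdevperf is started once in -z mode (wait for the perform_tests RPC);
# the binary path and config file are assumed here.
BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
PERF_PY=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
$BDEVPERF --json ftl.json -z -q 128 -o 4096 -w randwrite -t 4 &

# Phase 1 (just launched above): queue depth 1, 69632 B (68 KiB) random writes.
# 69632 exceeds the 65536 B threshold, hence the zero-copy notice above.
$PERF_PY perform_tests -q 1 -w randwrite -t 4 -o 69632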
00:39:36.124 650.00 IOPS, 43.16 MiB/s [2024-11-05T16:04:58.425Z] 1985.00 IOPS, 131.82 MiB/s [2024-11-05T16:04:59.364Z] 1913.00 IOPS, 127.04 MiB/s [2024-11-05T16:04:59.364Z] 1899.50 IOPS, 126.14 MiB/s 00:39:38.002 Latency(us) 00:39:38.002 [2024-11-05T16:04:59.365Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:38.003 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:39:38.003 ftl0 : 4.00 1898.49 126.07 0.00 0.00 556.45 148.87 3075.15 00:39:38.003 [2024-11-05T16:04:59.365Z] =================================================================================================================== 00:39:38.003 [2024-11-05T16:04:59.365Z] Total : 1898.49 126.07 0.00 0.00 556.45 148.87 3075.15 00:39:38.003 [2024-11-05 16:04:59.136606] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:39:38.003 { 00:39:38.003 "results": [ 00:39:38.003 { 00:39:38.003 "job": "ftl0", 00:39:38.003 "core_mask": "0x1", 00:39:38.003 "workload": "randwrite", 00:39:38.003 "status": "finished", 00:39:38.003 "queue_depth": 1, 00:39:38.003 "io_size": 69632, 00:39:38.003 "runtime": 4.002657, 00:39:38.003 "iops": 1898.4889287290916, 00:39:38.003 "mibps": 126.07153042341625, 00:39:38.003 "io_failed": 0, 00:39:38.003 "io_timeout": 0, 00:39:38.003 "avg_latency_us": 556.454963507344, 00:39:38.003 "min_latency_us": 148.87384615384616, 00:39:38.003 "max_latency_us": 3075.150769230769 00:39:38.003 } 00:39:38.003 ], 00:39:38.003 "core_count": 1 00:39:38.003 } 00:39:38.003 16:04:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:39:38.003 [2024-11-05 16:04:59.257134] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:39:38.003 Running I/O for 4 seconds... 
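Before the next phase's numbers come in, a consistency check on the phase-1 summary above: MiB/s is just IOPS times the I/O size, and the latency table and the results JSON agree.

# 1898.49 IOPS * 69632 B per I/O ~= 126.07 MiB/s, matching "mibps" above
awk 'BEGIN { printf "%.2f MiB/s\n", 1898.49 * 69632 / 1048576 }'

# the same fields can be pulled from a saved copy of the results JSON
# (results.json is an assumed filename)
jq '.results[0] | {iops, mibps, avg_latency_us}' results.json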
00:39:40.333 5750.00 IOPS, 22.46 MiB/s [2024-11-05T16:05:02.638Z] 5753.00 IOPS, 22.47 MiB/s [2024-11-05T16:05:03.587Z] 5310.33 IOPS, 20.74 MiB/s [2024-11-05T16:05:03.587Z] 5142.00 IOPS, 20.09 MiB/s 00:39:42.225 Latency(us) 00:39:42.225 [2024-11-05T16:05:03.587Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:42.225 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:39:42.225 ftl0 : 4.04 5119.35 20.00 0.00 0.00 24883.00 335.56 52025.50 00:39:42.225 [2024-11-05T16:05:03.587Z] =================================================================================================================== 00:39:42.225 [2024-11-05T16:05:03.587Z] Total : 5119.35 20.00 0.00 0.00 24883.00 0.00 52025.50 00:39:42.225 [2024-11-05 16:05:03.308817] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:39:42.225 { 00:39:42.225 "results": [ 00:39:42.225 { 00:39:42.225 "job": "ftl0", 00:39:42.225 "core_mask": "0x1", 00:39:42.225 "workload": "randwrite", 00:39:42.226 "status": "finished", 00:39:42.226 "queue_depth": 128, 00:39:42.226 "io_size": 4096, 00:39:42.226 "runtime": 4.041135, 00:39:42.226 "iops": 5119.353844897535, 00:39:42.226 "mibps": 19.997475956630996, 00:39:42.226 "io_failed": 0, 00:39:42.226 "io_timeout": 0, 00:39:42.226 "avg_latency_us": 24882.99565232911, 00:39:42.226 "min_latency_us": 335.55692307692306, 00:39:42.226 "max_latency_us": 52025.50153846154 00:39:42.226 } 00:39:42.226 ], 00:39:42.226 "core_count": 1 00:39:42.226 } 00:39:42.226 16:05:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:39:42.226 [2024-11-05 16:05:03.429658] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:39:42.226 Running I/O for 4 seconds... 
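The last phase swaps -w randwrite for -w verify, so bdevperf reads back and checks what it wrote across the LBA range reported below (start 0x0, length 0x1400000 blocks). That range covers the whole device:

# 0x1400000 = 20971520 blocks of 4096 B = 80 GiB, i.e. all of ftl0
# (one block per L2P entry from the startup layout dump)
awk 'BEGIN { printf "%.0f GiB\n", 20971520 * 4096 / (1024 * 1024 * 1024) }'

The runtimes a few milliseconds past the requested -t 4 (4.0069 s here, 4.0411 s in the previous phase) are the in-flight queue-depth-128 I/O draining after the timer expires.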
00:39:44.117 4229.00 IOPS, 16.52 MiB/s [2024-11-05T16:05:06.863Z] 4132.50 IOPS, 16.14 MiB/s [2024-11-05T16:05:07.797Z] 4364.00 IOPS, 17.05 MiB/s [2024-11-05T16:05:07.797Z] 5099.75 IOPS, 19.92 MiB/s 00:39:46.435 Latency(us) 00:39:46.435 [2024-11-05T16:05:07.797Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:46.435 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:39:46.435 Verification LBA range: start 0x0 length 0x1400000 00:39:46.435 ftl0 : 4.01 5114.99 19.98 0.00 0.00 24967.65 226.86 41338.09 00:39:46.435 [2024-11-05T16:05:07.797Z] =================================================================================================================== 00:39:46.435 [2024-11-05T16:05:07.797Z] Total : 5114.99 19.98 0.00 0.00 24967.65 0.00 41338.09 00:39:46.435 [2024-11-05 16:05:07.450483] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:39:46.435 { 00:39:46.435 "results": [ 00:39:46.435 { 00:39:46.435 "job": "ftl0", 00:39:46.435 "core_mask": "0x1", 00:39:46.435 "workload": "verify", 00:39:46.435 "status": "finished", 00:39:46.435 "verify_range": { 00:39:46.435 "start": 0, 00:39:46.435 "length": 20971520 00:39:46.435 }, 00:39:46.435 "queue_depth": 128, 00:39:46.435 "io_size": 4096, 00:39:46.435 "runtime": 4.006852, 00:39:46.435 "iops": 5114.988025512297, 00:39:46.435 "mibps": 19.98042197465741, 00:39:46.435 "io_failed": 0, 00:39:46.435 "io_timeout": 0, 00:39:46.435 "avg_latency_us": 24967.653836470432, 00:39:46.435 "min_latency_us": 226.85538461538462, 00:39:46.435 "max_latency_us": 41338.092307692306 00:39:46.435 } 00:39:46.435 ], 00:39:46.435 "core_count": 1 00:39:46.435 } 00:39:46.435 16:05:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:39:46.435 [2024-11-05 16:05:07.642099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:46.435 [2024-11-05 16:05:07.642138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:39:46.435 [2024-11-05 16:05:07.642150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:39:46.435 [2024-11-05 16:05:07.642157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:46.435 [2024-11-05 16:05:07.642175] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:39:46.435 [2024-11-05 16:05:07.644257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:46.435 [2024-11-05 16:05:07.644281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:39:46.435 [2024-11-05 16:05:07.644291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.066 ms 00:39:46.435 [2024-11-05 16:05:07.644298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:46.435 [2024-11-05 16:05:07.645851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:46.435 [2024-11-05 16:05:07.645877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:39:46.435 [2024-11-05 16:05:07.645886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.533 ms 00:39:46.435 [2024-11-05 16:05:07.645892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:46.435 [2024-11-05 16:05:07.763459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:46.435 [2024-11-05 16:05:07.763489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist L2P 00:39:46.435 [2024-11-05 16:05:07.763502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 117.547 ms 00:39:46.435 [2024-11-05 16:05:07.763509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:46.435 [2024-11-05 16:05:07.768258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:46.435 [2024-11-05 16:05:07.768280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:39:46.435 [2024-11-05 16:05:07.768289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.725 ms 00:39:46.435 [2024-11-05 16:05:07.768295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:46.435 [2024-11-05 16:05:07.786125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:46.435 [2024-11-05 16:05:07.786151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:39:46.435 [2024-11-05 16:05:07.786161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.796 ms 00:39:46.435 [2024-11-05 16:05:07.786167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:46.695 [2024-11-05 16:05:07.798072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:46.695 [2024-11-05 16:05:07.798101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:39:46.695 [2024-11-05 16:05:07.798115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.877 ms 00:39:46.695 [2024-11-05 16:05:07.798121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:46.695 [2024-11-05 16:05:07.798228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:46.695 [2024-11-05 16:05:07.798236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:39:46.695 [2024-11-05 16:05:07.798247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:39:46.695 [2024-11-05 16:05:07.798252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:46.695 [2024-11-05 16:05:07.816102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:46.695 [2024-11-05 16:05:07.816126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:39:46.695 [2024-11-05 16:05:07.816135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.835 ms 00:39:46.695 [2024-11-05 16:05:07.816141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:46.695 [2024-11-05 16:05:07.833290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:46.695 [2024-11-05 16:05:07.833394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:39:46.695 [2024-11-05 16:05:07.833409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.122 ms 00:39:46.695 [2024-11-05 16:05:07.833415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:46.695 [2024-11-05 16:05:07.850404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:46.695 [2024-11-05 16:05:07.850489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:39:46.695 [2024-11-05 16:05:07.850532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.965 ms 00:39:46.695 [2024-11-05 16:05:07.850549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:46.695 [2024-11-05 16:05:07.867485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:46.695 [2024-11-05 
16:05:07.867564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:39:46.695 [2024-11-05 16:05:07.867605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.879 ms 00:39:46.695 [2024-11-05 16:05:07.867621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:46.695 [2024-11-05 16:05:07.867651] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:39:46.695 [2024-11-05 16:05:07.867673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:39:46.695 [2024-11-05 16:05:07.867698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:39:46.695 [2024-11-05 16:05:07.867720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:39:46.695 [2024-11-05 16:05:07.867758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:39:46.695 [2024-11-05 16:05:07.867819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:39:46.695 [2024-11-05 16:05:07.867843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:39:46.695 [2024-11-05 16:05:07.867865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:39:46.695 [2024-11-05 16:05:07.867905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:39:46.695 [2024-11-05 16:05:07.867935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:39:46.695 [2024-11-05 16:05:07.867957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:39:46.695 [2024-11-05 16:05:07.867979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:39:46.695 [2024-11-05 16:05:07.868025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:39:46.695 [2024-11-05 16:05:07.868048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:39:46.695 [2024-11-05 16:05:07.868073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:39:46.695 [2024-11-05 16:05:07.868094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:39:46.695 [2024-11-05 16:05:07.868188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:39:46.695 [2024-11-05 16:05:07.868211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:39:46.695 [2024-11-05 16:05:07.868234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:39:46.695 [2024-11-05 16:05:07.868256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:39:46.695 [2024-11-05 16:05:07.868331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.868355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.868405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 
wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.868428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.868471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.868493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.868516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.868538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.868594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.868617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.868641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.868663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.868707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.868767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.868812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.868835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.868858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.868879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.868902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.868924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.868978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.869007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.869030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.869052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.869075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.869123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.869152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.869175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 47: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.869198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.869220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.869243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.869289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.869314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.869337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.869360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.869382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.869405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.869457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.869488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.869510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.869533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.869554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.869578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.869627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.869651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.869673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.869695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.869717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.869748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.869843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.869870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.869892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.869916] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.869966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.869992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.870015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.870038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.870059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.870109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.870152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.870178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.870219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.870243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.870265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.870288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.870320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.870369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.870393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.870416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.870438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.870461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.870483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.870531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.870553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.870577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.870599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.870622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.870643] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.870691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.870714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.870746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:39:46.696 [2024-11-05 16:05:07.870775] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:39:46.696 [2024-11-05 16:05:07.870792] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 05f601f3-1da6-49e6-b055-d7b08b3365e5 00:39:46.696 [2024-11-05 16:05:07.870814] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:39:46.696 [2024-11-05 16:05:07.870831] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:39:46.696 [2024-11-05 16:05:07.870874] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:39:46.696 [2024-11-05 16:05:07.870893] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:39:46.696 [2024-11-05 16:05:07.870907] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:39:46.696 [2024-11-05 16:05:07.870925] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:39:46.696 [2024-11-05 16:05:07.870940] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:39:46.696 [2024-11-05 16:05:07.870956] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:39:46.696 [2024-11-05 16:05:07.870969] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:39:46.697 [2024-11-05 16:05:07.870985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:46.697 [2024-11-05 16:05:07.871026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:39:46.697 [2024-11-05 16:05:07.871046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.334 ms 00:39:46.697 [2024-11-05 16:05:07.871061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:46.697 [2024-11-05 16:05:07.880309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:46.697 [2024-11-05 16:05:07.880389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:39:46.697 [2024-11-05 16:05:07.880446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.215 ms 00:39:46.697 [2024-11-05 16:05:07.880464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:46.697 [2024-11-05 16:05:07.880753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:46.697 [2024-11-05 16:05:07.880839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:39:46.697 [2024-11-05 16:05:07.880880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:39:46.697 [2024-11-05 16:05:07.880897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:46.697 [2024-11-05 16:05:07.908227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:46.697 [2024-11-05 16:05:07.908309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:39:46.697 [2024-11-05 16:05:07.908351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:46.697 [2024-11-05 16:05:07.908370] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:39:46.697 [2024-11-05 16:05:07.908420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:46.697 [2024-11-05 16:05:07.908453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:39:46.697 [2024-11-05 16:05:07.908474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:46.697 [2024-11-05 16:05:07.908488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:46.697 [2024-11-05 16:05:07.908584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:46.697 [2024-11-05 16:05:07.908613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:39:46.697 [2024-11-05 16:05:07.908631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:46.697 [2024-11-05 16:05:07.908645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:46.697 [2024-11-05 16:05:07.908705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:46.697 [2024-11-05 16:05:07.908725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:39:46.697 [2024-11-05 16:05:07.908754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:46.697 [2024-11-05 16:05:07.908770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:46.697 [2024-11-05 16:05:07.968240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:46.697 [2024-11-05 16:05:07.968348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:39:46.697 [2024-11-05 16:05:07.968390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:46.697 [2024-11-05 16:05:07.968408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:46.697 [2024-11-05 16:05:08.016866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:46.697 [2024-11-05 16:05:08.016985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:39:46.697 [2024-11-05 16:05:08.017024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:46.697 [2024-11-05 16:05:08.017041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:46.697 [2024-11-05 16:05:08.017107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:46.697 [2024-11-05 16:05:08.017126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:39:46.697 [2024-11-05 16:05:08.017144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:46.697 [2024-11-05 16:05:08.017159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:46.697 [2024-11-05 16:05:08.017213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:46.697 [2024-11-05 16:05:08.017231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:39:46.697 [2024-11-05 16:05:08.017248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:46.697 [2024-11-05 16:05:08.017295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:46.697 [2024-11-05 16:05:08.017371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:46.697 [2024-11-05 16:05:08.017379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:39:46.697 [2024-11-05 16:05:08.017391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:39:46.697 [2024-11-05 16:05:08.017396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:46.697 [2024-11-05 16:05:08.017424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:46.697 [2024-11-05 16:05:08.017432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:39:46.697 [2024-11-05 16:05:08.017439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:46.697 [2024-11-05 16:05:08.017444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:46.697 [2024-11-05 16:05:08.017473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:46.697 [2024-11-05 16:05:08.017479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:39:46.697 [2024-11-05 16:05:08.017487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:46.697 [2024-11-05 16:05:08.017494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:46.697 [2024-11-05 16:05:08.017527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:46.697 [2024-11-05 16:05:08.017539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:39:46.697 [2024-11-05 16:05:08.017547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:46.697 [2024-11-05 16:05:08.017552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:46.697 [2024-11-05 16:05:08.017648] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 375.519 ms, result 0 00:39:46.697 true 00:39:46.697 16:05:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 73116 00:39:46.697 16:05:08 ftl.ftl_bdevperf -- common/autotest_common.sh@952 -- # '[' -z 73116 ']' 00:39:46.697 16:05:08 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # kill -0 73116 00:39:46.697 16:05:08 ftl.ftl_bdevperf -- common/autotest_common.sh@957 -- # uname 00:39:46.697 16:05:08 ftl.ftl_bdevperf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:39:46.697 16:05:08 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73116 00:39:46.957 killing process with pid 73116 00:39:46.957 Received shutdown signal, test time was about 4.000000 seconds 00:39:46.957 00:39:46.957 Latency(us) 00:39:46.957 [2024-11-05T16:05:08.319Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:46.957 [2024-11-05T16:05:08.319Z] =================================================================================================================== 00:39:46.957 [2024-11-05T16:05:08.319Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:46.957 16:05:08 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:39:46.957 16:05:08 ftl.ftl_bdevperf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:39:46.957 16:05:08 ftl.ftl_bdevperf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73116' 00:39:46.957 16:05:08 ftl.ftl_bdevperf -- common/autotest_common.sh@971 -- # kill 73116 00:39:46.957 16:05:08 ftl.ftl_bdevperf -- common/autotest_common.sh@976 -- # wait 73116 00:39:51.208 Remove shared memory files 00:39:51.208 16:05:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:39:51.208 16:05:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:39:51.208 16:05:12 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:39:51.208 16:05:12 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:39:51.208 16:05:12 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:39:51.208 16:05:12 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:39:51.208 16:05:12 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:39:51.208 16:05:12 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:39:51.208 ************************************ 00:39:51.208 END TEST ftl_bdevperf 00:39:51.208 ************************************ 00:39:51.208 00:39:51.208 real 0m24.261s 00:39:51.208 user 0m26.955s 00:39:51.208 sys 0m0.913s 00:39:51.208 16:05:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:39:51.208 16:05:12 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:51.208 16:05:12 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:39:51.208 16:05:12 ftl -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:39:51.208 16:05:12 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:39:51.208 16:05:12 ftl -- common/autotest_common.sh@10 -- # set +x 00:39:51.208 ************************************ 00:39:51.208 START TEST ftl_trim 00:39:51.208 ************************************ 00:39:51.208 16:05:12 ftl.ftl_trim -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:39:51.208 * Looking for test storage... 00:39:51.208 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:39:51.209 16:05:12 ftl.ftl_trim -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:39:51.209 16:05:12 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # lcov --version 00:39:51.209 16:05:12 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:39:51.209 16:05:12 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:39:51.209 16:05:12 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:51.209 16:05:12 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:51.209 16:05:12 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:51.209 16:05:12 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:39:51.209 16:05:12 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:39:51.209 16:05:12 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:39:51.209 16:05:12 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:39:51.209 16:05:12 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:39:51.209 16:05:12 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:39:51.209 16:05:12 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:39:51.209 16:05:12 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:51.209 16:05:12 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:39:51.209 16:05:12 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:39:51.209 16:05:12 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:51.209 16:05:12 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:51.209 16:05:12 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:39:51.209 16:05:12 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:39:51.209 16:05:12 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:51.209 16:05:12 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:39:51.209 16:05:12 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:39:51.209 16:05:12 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:39:51.209 16:05:12 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:39:51.209 16:05:12 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:51.209 16:05:12 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:39:51.209 16:05:12 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:39:51.209 16:05:12 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:51.209 16:05:12 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:51.209 16:05:12 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:39:51.209 16:05:12 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:51.209 16:05:12 ftl.ftl_trim -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:39:51.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:51.209 --rc genhtml_branch_coverage=1 00:39:51.209 --rc genhtml_function_coverage=1 00:39:51.209 --rc genhtml_legend=1 00:39:51.209 --rc geninfo_all_blocks=1 00:39:51.209 --rc geninfo_unexecuted_blocks=1 00:39:51.209 00:39:51.209 ' 00:39:51.209 16:05:12 ftl.ftl_trim -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:39:51.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:51.209 --rc genhtml_branch_coverage=1 00:39:51.209 --rc genhtml_function_coverage=1 00:39:51.209 --rc genhtml_legend=1 00:39:51.209 --rc geninfo_all_blocks=1 00:39:51.209 --rc geninfo_unexecuted_blocks=1 00:39:51.209 00:39:51.209 ' 00:39:51.209 16:05:12 ftl.ftl_trim -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:39:51.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:51.209 --rc genhtml_branch_coverage=1 00:39:51.209 --rc genhtml_function_coverage=1 00:39:51.209 --rc genhtml_legend=1 00:39:51.209 --rc geninfo_all_blocks=1 00:39:51.209 --rc geninfo_unexecuted_blocks=1 00:39:51.209 00:39:51.209 ' 00:39:51.209 16:05:12 ftl.ftl_trim -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:39:51.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:51.209 --rc genhtml_branch_coverage=1 00:39:51.209 --rc genhtml_function_coverage=1 00:39:51.209 --rc genhtml_legend=1 00:39:51.209 --rc geninfo_all_blocks=1 00:39:51.209 --rc geninfo_unexecuted_blocks=1 00:39:51.209 00:39:51.209 ' 00:39:51.209 16:05:12 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:39:51.209 16:05:12 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:39:51.209 16:05:12 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:39:51.209 16:05:12 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:39:51.209 16:05:12 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
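The scripts/common.sh trace above shows the harness probing the installed lcov (1.15) against 2 to pick coverage flags: both version strings are split on '.', '-' or ':' into arrays and compared field by field up to the longer one's length. A minimal bash sketch of that comparison, paraphrased from the traced commands rather than copied from scripts/common.sh (the real helper differs in detail):

    # Split both versions on '.', '-' or ':' and compare field by field,
    # iterating up to the longer version's field count (missing fields read
    # as 0). Returns success iff the requested relation holds.
    cmp_versions() {
        local op=$2 ver1 ver2 v
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$3"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == *'='* ]]   # all fields compared equal
    }

    # lcov 1.15 sorts before 2, so the pre-2.x coverage flags get exported,
    # matching the LCOV_OPTS seen in the trace above:
    cmp_versions 1.15 '<' 2 &&
        LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'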
00:39:51.209 16:05:12 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:39:51.209 16:05:12 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:51.209 16:05:12 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:39:51.209 16:05:12 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:39:51.209 16:05:12 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:39:51.209 16:05:12 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:39:51.209 16:05:12 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:39:51.209 16:05:12 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:39:51.209 16:05:12 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:39:51.209 16:05:12 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:39:51.209 16:05:12 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:39:51.209 16:05:12 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:39:51.209 16:05:12 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:39:51.209 16:05:12 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:39:51.209 16:05:12 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:39:51.209 16:05:12 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:39:51.209 16:05:12 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:39:51.209 16:05:12 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:39:51.209 16:05:12 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:39:51.209 16:05:12 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:39:51.209 16:05:12 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:39:51.209 16:05:12 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:39:51.209 16:05:12 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:51.209 16:05:12 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:51.209 16:05:12 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:51.209 16:05:12 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:39:51.209 16:05:12 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:39:51.209 16:05:12 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:39:51.209 16:05:12 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:39:51.209 16:05:12 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:39:51.209 16:05:12 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:39:51.209 16:05:12 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:39:51.209 16:05:12 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:39:51.209 16:05:12 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:39:51.209 16:05:12 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:39:51.209 16:05:12 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:39:51.209 16:05:12 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=73455 00:39:51.209 16:05:12 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 73455 00:39:51.209 16:05:12 ftl.ftl_trim -- common/autotest_common.sh@833 -- # '[' -z 73455 ']' 00:39:51.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:51.209 16:05:12 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:39:51.209 16:05:12 ftl.ftl_trim -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:51.209 16:05:12 ftl.ftl_trim -- common/autotest_common.sh@838 -- # local max_retries=100 00:39:51.209 16:05:12 ftl.ftl_trim -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:51.209 16:05:12 ftl.ftl_trim -- common/autotest_common.sh@842 -- # xtrace_disable 00:39:51.209 16:05:12 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:39:51.209 [2024-11-05 16:05:12.384538] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 00:39:51.209 [2024-11-05 16:05:12.384954] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73455 ] 00:39:51.209 [2024-11-05 16:05:12.545202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:51.468 [2024-11-05 16:05:12.634834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:51.468 [2024-11-05 16:05:12.634897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:51.468 [2024-11-05 16:05:12.634922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:52.037 16:05:13 ftl.ftl_trim -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:39:52.037 16:05:13 ftl.ftl_trim -- common/autotest_common.sh@866 -- # return 0 00:39:52.037 16:05:13 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:39:52.037 16:05:13 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:39:52.037 16:05:13 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:39:52.037 16:05:13 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:39:52.037 16:05:13 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:39:52.037 16:05:13 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:39:52.297 16:05:13 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:39:52.297 16:05:13 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:39:52.297 16:05:13 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:39:52.297 16:05:13 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:39:52.297 16:05:13 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:39:52.297 16:05:13 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:39:52.297 16:05:13 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:39:52.297 16:05:13 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:39:52.559 16:05:13 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:39:52.559 { 00:39:52.559 "name": "nvme0n1", 00:39:52.559 "aliases": [ 
00:39:52.559 "4f5c7fad-5b7e-4403-9f89-a6896a30e829" 00:39:52.559 ], 00:39:52.559 "product_name": "NVMe disk", 00:39:52.559 "block_size": 4096, 00:39:52.559 "num_blocks": 1310720, 00:39:52.559 "uuid": "4f5c7fad-5b7e-4403-9f89-a6896a30e829", 00:39:52.559 "numa_id": -1, 00:39:52.559 "assigned_rate_limits": { 00:39:52.559 "rw_ios_per_sec": 0, 00:39:52.559 "rw_mbytes_per_sec": 0, 00:39:52.559 "r_mbytes_per_sec": 0, 00:39:52.559 "w_mbytes_per_sec": 0 00:39:52.559 }, 00:39:52.559 "claimed": true, 00:39:52.559 "claim_type": "read_many_write_one", 00:39:52.559 "zoned": false, 00:39:52.559 "supported_io_types": { 00:39:52.559 "read": true, 00:39:52.559 "write": true, 00:39:52.559 "unmap": true, 00:39:52.559 "flush": true, 00:39:52.559 "reset": true, 00:39:52.559 "nvme_admin": true, 00:39:52.559 "nvme_io": true, 00:39:52.559 "nvme_io_md": false, 00:39:52.559 "write_zeroes": true, 00:39:52.559 "zcopy": false, 00:39:52.559 "get_zone_info": false, 00:39:52.559 "zone_management": false, 00:39:52.559 "zone_append": false, 00:39:52.559 "compare": true, 00:39:52.559 "compare_and_write": false, 00:39:52.559 "abort": true, 00:39:52.559 "seek_hole": false, 00:39:52.559 "seek_data": false, 00:39:52.559 "copy": true, 00:39:52.559 "nvme_iov_md": false 00:39:52.559 }, 00:39:52.559 "driver_specific": { 00:39:52.559 "nvme": [ 00:39:52.559 { 00:39:52.559 "pci_address": "0000:00:11.0", 00:39:52.559 "trid": { 00:39:52.559 "trtype": "PCIe", 00:39:52.559 "traddr": "0000:00:11.0" 00:39:52.559 }, 00:39:52.559 "ctrlr_data": { 00:39:52.559 "cntlid": 0, 00:39:52.559 "vendor_id": "0x1b36", 00:39:52.559 "model_number": "QEMU NVMe Ctrl", 00:39:52.559 "serial_number": "12341", 00:39:52.559 "firmware_revision": "8.0.0", 00:39:52.559 "subnqn": "nqn.2019-08.org.qemu:12341", 00:39:52.559 "oacs": { 00:39:52.559 "security": 0, 00:39:52.559 "format": 1, 00:39:52.559 "firmware": 0, 00:39:52.559 "ns_manage": 1 00:39:52.559 }, 00:39:52.559 "multi_ctrlr": false, 00:39:52.559 "ana_reporting": false 00:39:52.559 }, 00:39:52.559 "vs": { 00:39:52.559 "nvme_version": "1.4" 00:39:52.559 }, 00:39:52.559 "ns_data": { 00:39:52.559 "id": 1, 00:39:52.559 "can_share": false 00:39:52.559 } 00:39:52.559 } 00:39:52.559 ], 00:39:52.559 "mp_policy": "active_passive" 00:39:52.559 } 00:39:52.559 } 00:39:52.559 ]' 00:39:52.559 16:05:13 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:39:52.559 16:05:13 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # bs=4096 00:39:52.559 16:05:13 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:39:52.559 16:05:13 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # nb=1310720 00:39:52.559 16:05:13 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:39:52.559 16:05:13 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 5120 00:39:52.559 16:05:13 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:39:52.559 16:05:13 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:39:52.559 16:05:13 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:39:52.559 16:05:13 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:39:52.559 16:05:13 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:39:52.820 16:05:14 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=b1b7edb3-4d64-403d-bdd0-3bf42be2b524 00:39:52.820 16:05:14 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:39:52.820 16:05:14 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u b1b7edb3-4d64-403d-bdd0-3bf42be2b524 00:39:53.081 16:05:14 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:39:53.342 16:05:14 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=5de44d8f-b151-473f-9339-a2e3219ee37e 00:39:53.342 16:05:14 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 5de44d8f-b151-473f-9339-a2e3219ee37e 00:39:53.604 16:05:14 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=d0e206e3-b179-42d1-b1c3-cff6b48bc50c 00:39:53.604 16:05:14 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 d0e206e3-b179-42d1-b1c3-cff6b48bc50c 00:39:53.604 16:05:14 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:39:53.604 16:05:14 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:39:53.604 16:05:14 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=d0e206e3-b179-42d1-b1c3-cff6b48bc50c 00:39:53.604 16:05:14 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:39:53.604 16:05:14 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size d0e206e3-b179-42d1-b1c3-cff6b48bc50c 00:39:53.604 16:05:14 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=d0e206e3-b179-42d1-b1c3-cff6b48bc50c 00:39:53.604 16:05:14 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:39:53.604 16:05:14 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:39:53.604 16:05:14 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:39:53.604 16:05:14 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d0e206e3-b179-42d1-b1c3-cff6b48bc50c 00:39:53.604 16:05:14 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:39:53.604 { 00:39:53.604 "name": "d0e206e3-b179-42d1-b1c3-cff6b48bc50c", 00:39:53.604 "aliases": [ 00:39:53.604 "lvs/nvme0n1p0" 00:39:53.604 ], 00:39:53.604 "product_name": "Logical Volume", 00:39:53.604 "block_size": 4096, 00:39:53.604 "num_blocks": 26476544, 00:39:53.604 "uuid": "d0e206e3-b179-42d1-b1c3-cff6b48bc50c", 00:39:53.604 "assigned_rate_limits": { 00:39:53.604 "rw_ios_per_sec": 0, 00:39:53.604 "rw_mbytes_per_sec": 0, 00:39:53.604 "r_mbytes_per_sec": 0, 00:39:53.604 "w_mbytes_per_sec": 0 00:39:53.604 }, 00:39:53.604 "claimed": false, 00:39:53.604 "zoned": false, 00:39:53.604 "supported_io_types": { 00:39:53.604 "read": true, 00:39:53.604 "write": true, 00:39:53.604 "unmap": true, 00:39:53.604 "flush": false, 00:39:53.604 "reset": true, 00:39:53.604 "nvme_admin": false, 00:39:53.604 "nvme_io": false, 00:39:53.604 "nvme_io_md": false, 00:39:53.604 "write_zeroes": true, 00:39:53.604 "zcopy": false, 00:39:53.604 "get_zone_info": false, 00:39:53.604 "zone_management": false, 00:39:53.604 "zone_append": false, 00:39:53.604 "compare": false, 00:39:53.604 "compare_and_write": false, 00:39:53.604 "abort": false, 00:39:53.604 "seek_hole": true, 00:39:53.604 "seek_data": true, 00:39:53.604 "copy": false, 00:39:53.604 "nvme_iov_md": false 00:39:53.604 }, 00:39:53.604 "driver_specific": { 00:39:53.604 "lvol": { 00:39:53.604 "lvol_store_uuid": "5de44d8f-b151-473f-9339-a2e3219ee37e", 00:39:53.604 "base_bdev": "nvme0n1", 00:39:53.604 "thin_provision": true, 00:39:53.604 "num_allocated_clusters": 0, 00:39:53.604 "snapshot": false, 00:39:53.604 "clone": false, 00:39:53.604 "esnap_clone": false 00:39:53.604 } 00:39:53.604 } 00:39:53.604 } 00:39:53.604 ]' 00:39:53.604 16:05:14 ftl.ftl_trim -- 
common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:39:53.604 16:05:14 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # bs=4096 00:39:53.605 16:05:14 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:39:53.866 16:05:14 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # nb=26476544 00:39:53.866 16:05:14 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:39:53.866 16:05:14 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 103424 00:39:53.866 16:05:14 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:39:53.866 16:05:14 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:39:53.866 16:05:14 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:39:54.127 16:05:15 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:39:54.127 16:05:15 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:39:54.127 16:05:15 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size d0e206e3-b179-42d1-b1c3-cff6b48bc50c 00:39:54.127 16:05:15 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=d0e206e3-b179-42d1-b1c3-cff6b48bc50c 00:39:54.127 16:05:15 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:39:54.127 16:05:15 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:39:54.127 16:05:15 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:39:54.127 16:05:15 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d0e206e3-b179-42d1-b1c3-cff6b48bc50c 00:39:54.127 16:05:15 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:39:54.127 { 00:39:54.127 "name": "d0e206e3-b179-42d1-b1c3-cff6b48bc50c", 00:39:54.127 "aliases": [ 00:39:54.127 "lvs/nvme0n1p0" 00:39:54.127 ], 00:39:54.127 "product_name": "Logical Volume", 00:39:54.127 "block_size": 4096, 00:39:54.127 "num_blocks": 26476544, 00:39:54.127 "uuid": "d0e206e3-b179-42d1-b1c3-cff6b48bc50c", 00:39:54.127 "assigned_rate_limits": { 00:39:54.127 "rw_ios_per_sec": 0, 00:39:54.127 "rw_mbytes_per_sec": 0, 00:39:54.127 "r_mbytes_per_sec": 0, 00:39:54.127 "w_mbytes_per_sec": 0 00:39:54.127 }, 00:39:54.127 "claimed": false, 00:39:54.127 "zoned": false, 00:39:54.127 "supported_io_types": { 00:39:54.127 "read": true, 00:39:54.127 "write": true, 00:39:54.127 "unmap": true, 00:39:54.127 "flush": false, 00:39:54.127 "reset": true, 00:39:54.127 "nvme_admin": false, 00:39:54.127 "nvme_io": false, 00:39:54.127 "nvme_io_md": false, 00:39:54.127 "write_zeroes": true, 00:39:54.127 "zcopy": false, 00:39:54.127 "get_zone_info": false, 00:39:54.127 "zone_management": false, 00:39:54.127 "zone_append": false, 00:39:54.127 "compare": false, 00:39:54.127 "compare_and_write": false, 00:39:54.127 "abort": false, 00:39:54.127 "seek_hole": true, 00:39:54.127 "seek_data": true, 00:39:54.127 "copy": false, 00:39:54.127 "nvme_iov_md": false 00:39:54.127 }, 00:39:54.127 "driver_specific": { 00:39:54.127 "lvol": { 00:39:54.127 "lvol_store_uuid": "5de44d8f-b151-473f-9339-a2e3219ee37e", 00:39:54.127 "base_bdev": "nvme0n1", 00:39:54.127 "thin_provision": true, 00:39:54.127 "num_allocated_clusters": 0, 00:39:54.127 "snapshot": false, 00:39:54.127 "clone": false, 00:39:54.127 "esnap_clone": false 00:39:54.127 } 00:39:54.127 } 00:39:54.127 } 00:39:54.127 ]' 00:39:54.127 16:05:15 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:39:54.127 16:05:15 ftl.ftl_trim -- 
common/autotest_common.sh@1385 -- # bs=4096 00:39:54.127 16:05:15 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:39:54.387 16:05:15 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # nb=26476544 00:39:54.387 16:05:15 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:39:54.387 16:05:15 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 103424 00:39:54.387 16:05:15 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:39:54.387 16:05:15 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:39:54.387 16:05:15 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:39:54.387 16:05:15 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:39:54.387 16:05:15 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size d0e206e3-b179-42d1-b1c3-cff6b48bc50c 00:39:54.387 16:05:15 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=d0e206e3-b179-42d1-b1c3-cff6b48bc50c 00:39:54.387 16:05:15 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:39:54.387 16:05:15 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:39:54.387 16:05:15 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:39:54.387 16:05:15 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d0e206e3-b179-42d1-b1c3-cff6b48bc50c 00:39:54.646 16:05:15 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:39:54.646 { 00:39:54.646 "name": "d0e206e3-b179-42d1-b1c3-cff6b48bc50c", 00:39:54.646 "aliases": [ 00:39:54.646 "lvs/nvme0n1p0" 00:39:54.646 ], 00:39:54.646 "product_name": "Logical Volume", 00:39:54.646 "block_size": 4096, 00:39:54.646 "num_blocks": 26476544, 00:39:54.646 "uuid": "d0e206e3-b179-42d1-b1c3-cff6b48bc50c", 00:39:54.646 "assigned_rate_limits": { 00:39:54.646 "rw_ios_per_sec": 0, 00:39:54.646 "rw_mbytes_per_sec": 0, 00:39:54.646 "r_mbytes_per_sec": 0, 00:39:54.646 "w_mbytes_per_sec": 0 00:39:54.646 }, 00:39:54.646 "claimed": false, 00:39:54.646 "zoned": false, 00:39:54.646 "supported_io_types": { 00:39:54.646 "read": true, 00:39:54.646 "write": true, 00:39:54.646 "unmap": true, 00:39:54.646 "flush": false, 00:39:54.646 "reset": true, 00:39:54.646 "nvme_admin": false, 00:39:54.646 "nvme_io": false, 00:39:54.646 "nvme_io_md": false, 00:39:54.646 "write_zeroes": true, 00:39:54.646 "zcopy": false, 00:39:54.646 "get_zone_info": false, 00:39:54.646 "zone_management": false, 00:39:54.646 "zone_append": false, 00:39:54.646 "compare": false, 00:39:54.646 "compare_and_write": false, 00:39:54.646 "abort": false, 00:39:54.646 "seek_hole": true, 00:39:54.646 "seek_data": true, 00:39:54.646 "copy": false, 00:39:54.646 "nvme_iov_md": false 00:39:54.646 }, 00:39:54.646 "driver_specific": { 00:39:54.646 "lvol": { 00:39:54.646 "lvol_store_uuid": "5de44d8f-b151-473f-9339-a2e3219ee37e", 00:39:54.646 "base_bdev": "nvme0n1", 00:39:54.646 "thin_provision": true, 00:39:54.646 "num_allocated_clusters": 0, 00:39:54.646 "snapshot": false, 00:39:54.646 "clone": false, 00:39:54.646 "esnap_clone": false 00:39:54.646 } 00:39:54.646 } 00:39:54.646 } 00:39:54.646 ]' 00:39:54.646 16:05:15 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:39:54.646 16:05:15 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # bs=4096 00:39:54.646 16:05:15 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:39:54.646 16:05:15 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # 
nb=26476544 00:39:54.646 16:05:15 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:39:54.646 16:05:15 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 103424 00:39:54.646 16:05:15 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:39:54.646 16:05:15 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d d0e206e3-b179-42d1-b1c3-cff6b48bc50c -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:39:54.905 [2024-11-05 16:05:16.179892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:54.906 [2024-11-05 16:05:16.179928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:39:54.906 [2024-11-05 16:05:16.179943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:39:54.906 [2024-11-05 16:05:16.179950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:54.906 [2024-11-05 16:05:16.186395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:54.906 [2024-11-05 16:05:16.186495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:39:54.906 [2024-11-05 16:05:16.186534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.411 ms 00:39:54.906 [2024-11-05 16:05:16.186560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:54.906 [2024-11-05 16:05:16.186951] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:39:54.906 [2024-11-05 16:05:16.187885] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:39:54.906 [2024-11-05 16:05:16.188080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:54.906 [2024-11-05 16:05:16.188145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:39:54.906 [2024-11-05 16:05:16.188204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.144 ms 00:39:54.906 [2024-11-05 16:05:16.188258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:54.906 [2024-11-05 16:05:16.188473] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 00f76824-5a3a-4487-9ed4-1ffd3d9e229e 00:39:54.906 [2024-11-05 16:05:16.190042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:54.906 [2024-11-05 16:05:16.190210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:39:54.906 [2024-11-05 16:05:16.190582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:39:54.906 [2024-11-05 16:05:16.190656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:54.906 [2024-11-05 16:05:16.197904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:54.906 [2024-11-05 16:05:16.198094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:39:54.906 [2024-11-05 16:05:16.198176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.083 ms 00:39:54.906 [2024-11-05 16:05:16.198230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:54.906 [2024-11-05 16:05:16.198426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:54.906 [2024-11-05 16:05:16.198566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:39:54.906 [2024-11-05 16:05:16.198650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.079 ms 00:39:54.906 [2024-11-05 16:05:16.198721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:54.906 [2024-11-05 16:05:16.198888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:54.906 [2024-11-05 16:05:16.198934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:39:54.906 [2024-11-05 16:05:16.198981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:39:54.906 [2024-11-05 16:05:16.199033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:54.906 [2024-11-05 16:05:16.199099] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:39:54.906 [2024-11-05 16:05:16.203202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:54.906 [2024-11-05 16:05:16.203276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:39:54.906 [2024-11-05 16:05:16.203410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.107 ms 00:39:54.906 [2024-11-05 16:05:16.203465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:54.906 [2024-11-05 16:05:16.203571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:54.906 [2024-11-05 16:05:16.203673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:39:54.906 [2024-11-05 16:05:16.203730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:39:54.906 [2024-11-05 16:05:16.203858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:54.906 [2024-11-05 16:05:16.203938] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:39:54.906 [2024-11-05 16:05:16.204171] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:39:54.906 [2024-11-05 16:05:16.204231] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:39:54.906 [2024-11-05 16:05:16.204278] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:39:54.906 [2024-11-05 16:05:16.204372] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:39:54.906 [2024-11-05 16:05:16.204421] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:39:54.906 [2024-11-05 16:05:16.204463] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:39:54.906 [2024-11-05 16:05:16.204543] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:39:54.906 [2024-11-05 16:05:16.204592] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:39:54.906 [2024-11-05 16:05:16.204638] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:39:54.906 [2024-11-05 16:05:16.204677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:54.906 [2024-11-05 16:05:16.204767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:39:54.906 [2024-11-05 16:05:16.204827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.742 ms 00:39:54.906 [2024-11-05 16:05:16.204870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:54.906 [2024-11-05 16:05:16.205000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:54.906 
[2024-11-05 16:05:16.205083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:39:54.906 [2024-11-05 16:05:16.205152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:39:54.906 [2024-11-05 16:05:16.205206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:54.906 [2024-11-05 16:05:16.205387] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:39:54.906 [2024-11-05 16:05:16.205491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:39:54.906 [2024-11-05 16:05:16.205537] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:39:54.906 [2024-11-05 16:05:16.205582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:54.906 [2024-11-05 16:05:16.205617] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:39:54.906 [2024-11-05 16:05:16.205721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:39:54.906 [2024-11-05 16:05:16.205789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:39:54.906 [2024-11-05 16:05:16.205833] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:39:54.906 [2024-11-05 16:05:16.205845] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:39:54.906 [2024-11-05 16:05:16.205853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:39:54.906 [2024-11-05 16:05:16.205861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:39:54.906 [2024-11-05 16:05:16.205868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:39:54.906 [2024-11-05 16:05:16.205877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:39:54.906 [2024-11-05 16:05:16.205883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:39:54.906 [2024-11-05 16:05:16.205892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:39:54.906 [2024-11-05 16:05:16.205898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:54.906 [2024-11-05 16:05:16.205910] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:39:54.906 [2024-11-05 16:05:16.205916] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:39:54.906 [2024-11-05 16:05:16.205925] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:54.906 [2024-11-05 16:05:16.205932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:39:54.906 [2024-11-05 16:05:16.205946] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:39:54.906 [2024-11-05 16:05:16.205953] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:54.906 [2024-11-05 16:05:16.205961] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:39:54.906 [2024-11-05 16:05:16.205968] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:39:54.906 [2024-11-05 16:05:16.205976] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:54.906 [2024-11-05 16:05:16.205982] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:39:54.906 [2024-11-05 16:05:16.205990] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:39:54.906 [2024-11-05 16:05:16.205997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:54.906 [2024-11-05 16:05:16.206005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:39:54.906 [2024-11-05 16:05:16.206012] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:39:54.906 [2024-11-05 16:05:16.206019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:54.906 [2024-11-05 16:05:16.206026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:39:54.906 [2024-11-05 16:05:16.206036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:39:54.906 [2024-11-05 16:05:16.206042] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:39:54.906 [2024-11-05 16:05:16.206050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:39:54.906 [2024-11-05 16:05:16.206057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:39:54.906 [2024-11-05 16:05:16.206065] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:39:54.906 [2024-11-05 16:05:16.206071] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:39:54.906 [2024-11-05 16:05:16.206079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:39:54.906 [2024-11-05 16:05:16.206085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:54.906 [2024-11-05 16:05:16.206096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:39:54.906 [2024-11-05 16:05:16.206105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:39:54.906 [2024-11-05 16:05:16.206113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:54.906 [2024-11-05 16:05:16.206120] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:39:54.906 [2024-11-05 16:05:16.206129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:39:54.906 [2024-11-05 16:05:16.206137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:39:54.906 [2024-11-05 16:05:16.206146] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:54.906 [2024-11-05 16:05:16.206153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:39:54.907 [2024-11-05 16:05:16.206165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:39:54.907 [2024-11-05 16:05:16.206172] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:39:54.907 [2024-11-05 16:05:16.206180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:39:54.907 [2024-11-05 16:05:16.206187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:39:54.907 [2024-11-05 16:05:16.206196] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:39:54.907 [2024-11-05 16:05:16.206207] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:39:54.907 [2024-11-05 16:05:16.206218] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:39:54.907 [2024-11-05 16:05:16.206226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:39:54.907 [2024-11-05 16:05:16.206235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:39:54.907 [2024-11-05 16:05:16.206243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:39:54.907 [2024-11-05 16:05:16.206252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:39:54.907 [2024-11-05 16:05:16.206259] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:39:54.907 [2024-11-05 16:05:16.206267] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:39:54.907 [2024-11-05 16:05:16.206274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:39:54.907 [2024-11-05 16:05:16.206283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:39:54.907 [2024-11-05 16:05:16.206290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:39:54.907 [2024-11-05 16:05:16.206310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:39:54.907 [2024-11-05 16:05:16.206317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:39:54.907 [2024-11-05 16:05:16.206326] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:39:54.907 [2024-11-05 16:05:16.206333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:39:54.907 [2024-11-05 16:05:16.206343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:39:54.907 [2024-11-05 16:05:16.206350] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:39:54.907 [2024-11-05 16:05:16.206366] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:39:54.907 [2024-11-05 16:05:16.206374] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:39:54.907 [2024-11-05 16:05:16.206384] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:39:54.907 [2024-11-05 16:05:16.206392] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:39:54.907 [2024-11-05 16:05:16.206401] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:39:54.907 [2024-11-05 16:05:16.206409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:54.907 [2024-11-05 16:05:16.206418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:39:54.907 [2024-11-05 16:05:16.206426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.093 ms 00:39:54.907 [2024-11-05 16:05:16.206435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:54.907 [2024-11-05 16:05:16.206521] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:39:54.907 [2024-11-05 16:05:16.206534] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:39:57.436 [2024-11-05 16:05:18.499360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:57.436 [2024-11-05 16:05:18.499832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:39:57.436 [2024-11-05 16:05:18.499911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2292.828 ms 00:39:57.436 [2024-11-05 16:05:18.499963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:57.436 [2024-11-05 16:05:18.528337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:57.436 [2024-11-05 16:05:18.528622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:39:57.436 [2024-11-05 16:05:18.528707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.080 ms 00:39:57.436 [2024-11-05 16:05:18.528777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:57.436 [2024-11-05 16:05:18.528921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:57.436 [2024-11-05 16:05:18.528935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:39:57.436 [2024-11-05 16:05:18.528944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:39:57.436 [2024-11-05 16:05:18.528956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:57.436 [2024-11-05 16:05:18.576700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:57.436 [2024-11-05 16:05:18.577016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:39:57.436 [2024-11-05 16:05:18.577192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.690 ms 00:39:57.436 [2024-11-05 16:05:18.577331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:57.436 [2024-11-05 16:05:18.577516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:57.436 [2024-11-05 16:05:18.577607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:39:57.436 [2024-11-05 16:05:18.577817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:39:57.436 [2024-11-05 16:05:18.577965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:57.436 [2024-11-05 16:05:18.578819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:57.436 [2024-11-05 16:05:18.578994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:39:57.436 [2024-11-05 16:05:18.579047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.422 ms 00:39:57.436 [2024-11-05 16:05:18.579090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:57.436 [2024-11-05 16:05:18.579239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:57.436 [2024-11-05 16:05:18.579344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:39:57.436 [2024-11-05 16:05:18.579419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:39:57.436 [2024-11-05 16:05:18.579489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:57.436 [2024-11-05 16:05:18.596257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:57.436 [2024-11-05 16:05:18.596342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:39:57.436 [2024-11-05 16:05:18.596391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.414 ms 00:39:57.436 [2024-11-05 16:05:18.596436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:57.436 [2024-11-05 16:05:18.608748] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:39:57.436 [2024-11-05 16:05:18.626087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:57.436 [2024-11-05 16:05:18.626178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:39:57.436 [2024-11-05 16:05:18.626224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.511 ms 00:39:57.436 [2024-11-05 16:05:18.626267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:57.436 [2024-11-05 16:05:18.690824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:57.436 [2024-11-05 16:05:18.691012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:39:57.436 [2024-11-05 16:05:18.691092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.435 ms 00:39:57.436 [2024-11-05 16:05:18.691144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:57.436 [2024-11-05 16:05:18.691392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:57.436 [2024-11-05 16:05:18.691515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:39:57.436 [2024-11-05 16:05:18.691590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:39:57.436 [2024-11-05 16:05:18.691690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:57.436 [2024-11-05 16:05:18.715500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:57.436 [2024-11-05 16:05:18.715592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:39:57.436 [2024-11-05 16:05:18.715723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.646 ms 00:39:57.436 [2024-11-05 16:05:18.715795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:57.436 [2024-11-05 16:05:18.738992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:57.436 [2024-11-05 16:05:18.739161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:39:57.436 [2024-11-05 16:05:18.739219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.104 ms 00:39:57.436 [2024-11-05 16:05:18.739267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:57.436 [2024-11-05 16:05:18.739925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:57.436 [2024-11-05 16:05:18.740104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:39:57.436 [2024-11-05 16:05:18.740171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.570 ms 00:39:57.436 [2024-11-05 16:05:18.740213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:57.694 [2024-11-05 16:05:18.812549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:57.694 [2024-11-05 16:05:18.812822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:39:57.694 [2024-11-05 16:05:18.812952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.246 ms 00:39:57.694 [2024-11-05 16:05:18.813047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
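At this point trim.sh@49 has issued bdev_ftl_create and the management traces above and below are ftl0 starting up. Condensed from the rpc.py xtrace earlier in this test, the sequence that assembled the device was roughly the following ($rpc abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py, and the two UUID variables stand in for the values returned on this run: lvstore 5de44d8f-b151-473f-9339-a2e3219ee37e, lvol d0e206e3-b179-42d1-b1c3-cff6b48bc50c):

    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base NVMe (1310720 x 4096-byte blocks)
    $rpc bdev_lvol_create_lvstore nvme0n1 lvs                           # lvstore on the base namespace
    $rpc bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs_uuid"            # 103424 MiB thin-provisioned lvol
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # cache NVMe
    $rpc bdev_split_create nvc0n1 -s 5171 1                             # one 5171 MiB split -> nvc0n1p0
    $rpc -t 240 bdev_ftl_create -b ftl0 -d "$lvol_uuid" -c nvc0n1p0 \
        --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10         # ftl0 = lvol + NV cache

The layout dump above is consistent with those sizes: a 103424.00 MiB base device, a 5171.00 MiB NV cache, and an L2P resident in at most 60 MiB of DRAM (the l2p_dram_size_mb=60 computed at trim.sh@47).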
00:39:57.694 [2024-11-05 16:05:18.838589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:57.695 [2024-11-05 16:05:18.838786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:39:57.695 [2024-11-05 16:05:18.838936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.390 ms 00:39:57.695 [2024-11-05 16:05:18.839041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:57.695 [2024-11-05 16:05:18.863150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:57.695 [2024-11-05 16:05:18.863316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:39:57.695 [2024-11-05 16:05:18.863423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.937 ms 00:39:57.695 [2024-11-05 16:05:18.863484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:57.695 [2024-11-05 16:05:18.887513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:57.695 [2024-11-05 16:05:18.887682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:39:57.695 [2024-11-05 16:05:18.887787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.870 ms 00:39:57.695 [2024-11-05 16:05:18.887882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:57.695 [2024-11-05 16:05:18.888023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:57.695 [2024-11-05 16:05:18.888100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:39:57.695 [2024-11-05 16:05:18.888179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:39:57.695 [2024-11-05 16:05:18.888202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:57.695 [2024-11-05 16:05:18.888297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:57.695 [2024-11-05 16:05:18.888364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:39:57.695 [2024-11-05 16:05:18.888389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:39:57.695 [2024-11-05 16:05:18.888409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:57.695 [2024-11-05 16:05:18.889450] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:39:57.695 [2024-11-05 16:05:18.892566] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2709.213 ms, result 0 00:39:57.695 [2024-11-05 16:05:18.893839] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:39:57.695 { 00:39:57.695 "name": "ftl0", 00:39:57.695 "uuid": "00f76824-5a3a-4487-9ed4-1ffd3d9e229e" 00:39:57.695 } 00:39:57.695 16:05:18 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:39:57.695 16:05:18 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local bdev_name=ftl0 00:39:57.695 16:05:18 ftl.ftl_trim -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:39:57.695 16:05:18 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local i 00:39:57.695 16:05:18 ftl.ftl_trim -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:39:57.695 16:05:18 ftl.ftl_trim -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:39:57.695 16:05:18 ftl.ftl_trim -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:39:57.953 16:05:19 ftl.ftl_trim -- 
common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:39:57.953 [ 00:39:57.953 { 00:39:57.953 "name": "ftl0", 00:39:57.953 "aliases": [ 00:39:57.953 "00f76824-5a3a-4487-9ed4-1ffd3d9e229e" 00:39:57.953 ], 00:39:57.953 "product_name": "FTL disk", 00:39:57.953 "block_size": 4096, 00:39:57.953 "num_blocks": 23592960, 00:39:57.953 "uuid": "00f76824-5a3a-4487-9ed4-1ffd3d9e229e", 00:39:57.953 "assigned_rate_limits": { 00:39:57.953 "rw_ios_per_sec": 0, 00:39:57.953 "rw_mbytes_per_sec": 0, 00:39:57.953 "r_mbytes_per_sec": 0, 00:39:57.953 "w_mbytes_per_sec": 0 00:39:57.953 }, 00:39:57.953 "claimed": false, 00:39:57.953 "zoned": false, 00:39:57.953 "supported_io_types": { 00:39:57.953 "read": true, 00:39:57.953 "write": true, 00:39:57.953 "unmap": true, 00:39:57.953 "flush": true, 00:39:57.953 "reset": false, 00:39:57.953 "nvme_admin": false, 00:39:57.953 "nvme_io": false, 00:39:57.953 "nvme_io_md": false, 00:39:57.953 "write_zeroes": true, 00:39:57.953 "zcopy": false, 00:39:57.953 "get_zone_info": false, 00:39:57.953 "zone_management": false, 00:39:57.953 "zone_append": false, 00:39:57.953 "compare": false, 00:39:57.953 "compare_and_write": false, 00:39:57.953 "abort": false, 00:39:57.953 "seek_hole": false, 00:39:57.953 "seek_data": false, 00:39:57.953 "copy": false, 00:39:57.953 "nvme_iov_md": false 00:39:57.953 }, 00:39:57.953 "driver_specific": { 00:39:57.953 "ftl": { 00:39:57.953 "base_bdev": "d0e206e3-b179-42d1-b1c3-cff6b48bc50c", 00:39:57.953 "cache": "nvc0n1p0" 00:39:57.953 } 00:39:57.953 } 00:39:57.953 } 00:39:57.953 ] 00:39:57.953 16:05:19 ftl.ftl_trim -- common/autotest_common.sh@909 -- # return 0 00:39:57.953 16:05:19 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:39:57.953 16:05:19 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:39:58.212 16:05:19 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:39:58.212 16:05:19 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:39:58.473 16:05:19 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:39:58.473 { 00:39:58.473 "name": "ftl0", 00:39:58.473 "aliases": [ 00:39:58.473 "00f76824-5a3a-4487-9ed4-1ffd3d9e229e" 00:39:58.473 ], 00:39:58.473 "product_name": "FTL disk", 00:39:58.473 "block_size": 4096, 00:39:58.473 "num_blocks": 23592960, 00:39:58.473 "uuid": "00f76824-5a3a-4487-9ed4-1ffd3d9e229e", 00:39:58.473 "assigned_rate_limits": { 00:39:58.473 "rw_ios_per_sec": 0, 00:39:58.473 "rw_mbytes_per_sec": 0, 00:39:58.473 "r_mbytes_per_sec": 0, 00:39:58.474 "w_mbytes_per_sec": 0 00:39:58.474 }, 00:39:58.474 "claimed": false, 00:39:58.474 "zoned": false, 00:39:58.474 "supported_io_types": { 00:39:58.474 "read": true, 00:39:58.474 "write": true, 00:39:58.474 "unmap": true, 00:39:58.474 "flush": true, 00:39:58.474 "reset": false, 00:39:58.474 "nvme_admin": false, 00:39:58.474 "nvme_io": false, 00:39:58.474 "nvme_io_md": false, 00:39:58.474 "write_zeroes": true, 00:39:58.474 "zcopy": false, 00:39:58.474 "get_zone_info": false, 00:39:58.474 "zone_management": false, 00:39:58.474 "zone_append": false, 00:39:58.474 "compare": false, 00:39:58.474 "compare_and_write": false, 00:39:58.474 "abort": false, 00:39:58.474 "seek_hole": false, 00:39:58.474 "seek_data": false, 00:39:58.474 "copy": false, 00:39:58.474 "nvme_iov_md": false 00:39:58.474 }, 00:39:58.474 "driver_specific": { 00:39:58.474 "ftl": { 00:39:58.474 "base_bdev": "d0e206e3-b179-42d1-b1c3-cff6b48bc50c", 
00:39:58.474 "cache": "nvc0n1p0" 00:39:58.474 } 00:39:58.474 } 00:39:58.474 } 00:39:58.474 ]' 00:39:58.474 16:05:19 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:39:58.474 16:05:19 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:39:58.474 16:05:19 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:39:58.735 [2024-11-05 16:05:19.913666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:58.735 [2024-11-05 16:05:19.913707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:39:58.735 [2024-11-05 16:05:19.913722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:39:58.735 [2024-11-05 16:05:19.913745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:58.735 [2024-11-05 16:05:19.913795] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:39:58.735 [2024-11-05 16:05:19.916586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:58.735 [2024-11-05 16:05:19.916715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:39:58.735 [2024-11-05 16:05:19.916756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.773 ms 00:39:58.735 [2024-11-05 16:05:19.916766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:58.735 [2024-11-05 16:05:19.917313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:58.735 [2024-11-05 16:05:19.917330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:39:58.735 [2024-11-05 16:05:19.917342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.507 ms 00:39:58.735 [2024-11-05 16:05:19.917349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:58.735 [2024-11-05 16:05:19.921008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:58.735 [2024-11-05 16:05:19.921034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:39:58.735 [2024-11-05 16:05:19.921045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.627 ms 00:39:58.735 [2024-11-05 16:05:19.921053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:58.735 [2024-11-05 16:05:19.928321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:58.735 [2024-11-05 16:05:19.928350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:39:58.735 [2024-11-05 16:05:19.928362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.220 ms 00:39:58.735 [2024-11-05 16:05:19.928369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:58.735 [2024-11-05 16:05:19.952752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:58.735 [2024-11-05 16:05:19.952783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:39:58.735 [2024-11-05 16:05:19.952799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.300 ms 00:39:58.735 [2024-11-05 16:05:19.952806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:58.735 [2024-11-05 16:05:19.968502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:58.735 [2024-11-05 16:05:19.968534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:39:58.735 [2024-11-05 16:05:19.968548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 15.632 ms 00:39:58.735 [2024-11-05 16:05:19.968558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:58.735 [2024-11-05 16:05:19.968795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:58.735 [2024-11-05 16:05:19.968808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:39:58.735 [2024-11-05 16:05:19.968818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.156 ms 00:39:58.735 [2024-11-05 16:05:19.968826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:58.735 [2024-11-05 16:05:19.991763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:58.735 [2024-11-05 16:05:19.991793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:39:58.735 [2024-11-05 16:05:19.991806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.903 ms 00:39:58.735 [2024-11-05 16:05:19.991813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:58.735 [2024-11-05 16:05:20.014537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:58.735 [2024-11-05 16:05:20.014569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:39:58.735 [2024-11-05 16:05:20.014584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.662 ms 00:39:58.735 [2024-11-05 16:05:20.014591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:58.735 [2024-11-05 16:05:20.037329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:58.735 [2024-11-05 16:05:20.037362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:39:58.735 [2024-11-05 16:05:20.037375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.677 ms 00:39:58.735 [2024-11-05 16:05:20.037382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:58.735 [2024-11-05 16:05:20.060063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:58.735 [2024-11-05 16:05:20.060096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:39:58.735 [2024-11-05 16:05:20.060110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.566 ms 00:39:58.735 [2024-11-05 16:05:20.060117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:58.735 [2024-11-05 16:05:20.060187] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:39:58.735 [2024-11-05 16:05:20.060202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:39:58.735 [2024-11-05 16:05:20.060215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:39:58.735 [2024-11-05 16:05:20.060223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:39:58.735 [2024-11-05 16:05:20.060232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:39:58.735 [2024-11-05 16:05:20.060240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:39:58.735 [2024-11-05 16:05:20.060252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:39:58.735 [2024-11-05 16:05:20.060260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:39:58.735 [2024-11-05 16:05:20.060269] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:39:58.735 [2024-11-05 16:05:20.060278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:39:58.735 [2024-11-05 16:05:20.060287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:39:58.735 [2024-11-05 16:05:20.060294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:39:58.735 [2024-11-05 16:05:20.060304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:39:58.735 [2024-11-05 16:05:20.060311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:39:58.735 [2024-11-05 16:05:20.060321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:39:58.735 [2024-11-05 16:05:20.060328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:39:58.735 [2024-11-05 16:05:20.060346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:39:58.735 [2024-11-05 16:05:20.060353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:39:58.735 [2024-11-05 16:05:20.060363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:39:58.735 [2024-11-05 16:05:20.060370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:39:58.735 [2024-11-05 16:05:20.060380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:39:58.735 [2024-11-05 16:05:20.060387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:39:58.735 [2024-11-05 16:05:20.060416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:39:58.735 [2024-11-05 16:05:20.060424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:39:58.735 [2024-11-05 16:05:20.060433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:39:58.735 [2024-11-05 16:05:20.060441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:39:58.735 [2024-11-05 16:05:20.060450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:39:58.735 [2024-11-05 16:05:20.060458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:39:58.735 [2024-11-05 16:05:20.060467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:39:58.735 [2024-11-05 16:05:20.060475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:39:58.735 [2024-11-05 16:05:20.060486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:39:58.735 [2024-11-05 16:05:20.060494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:39:58.735 [2024-11-05 16:05:20.060504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:39:58.735 
[2024-11-05 16:05:20.060511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:39:58.735 [2024-11-05 16:05:20.060521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:39:58.735 [2024-11-05 16:05:20.060529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:39:58.735 [2024-11-05 16:05:20.060538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:39:58.735 [2024-11-05 16:05:20.060546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:39:58.735 [2024-11-05 16:05:20.060557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:39:58.736 [2024-11-05 16:05:20.060564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:39:58.736 [2024-11-05 16:05:20.060573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:39:58.736 [2024-11-05 16:05:20.060581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:39:58.736 [2024-11-05 16:05:20.060590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:39:58.736 [2024-11-05 16:05:20.060598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:39:58.736 [2024-11-05 16:05:20.060607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:39:58.736 [2024-11-05 16:05:20.060614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:39:58.736 [2024-11-05 16:05:20.060625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:39:58.736 [2024-11-05 16:05:20.060632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:39:58.736 [2024-11-05 16:05:20.060641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:39:58.736 [2024-11-05 16:05:20.060649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:39:58.736 [2024-11-05 16:05:20.060658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:39:58.736 [2024-11-05 16:05:20.060666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:39:58.736 [2024-11-05 16:05:20.060675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:39:58.736 [2024-11-05 16:05:20.060682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:39:58.736 [2024-11-05 16:05:20.060692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:39:58.736 [2024-11-05 16:05:20.060700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:39:58.736 [2024-11-05 16:05:20.060710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:39:58.736 [2024-11-05 16:05:20.060717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:39:58.736 [2024-11-05 16:05:20.060726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:39:58.736 [2024-11-05 16:05:20.060751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:39:58.736 [2024-11-05 16:05:20.060761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:39:58.736 [2024-11-05 16:05:20.060769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:39:58.736 [2024-11-05 16:05:20.060779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:39:58.736 [2024-11-05 16:05:20.060787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:39:58.736 [2024-11-05 16:05:20.060796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:39:58.736 [2024-11-05 16:05:20.060804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:39:58.736 [2024-11-05 16:05:20.060814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:39:58.736 [2024-11-05 16:05:20.060821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:39:58.736 [2024-11-05 16:05:20.060830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:39:58.736 [2024-11-05 16:05:20.060838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:39:58.736 [2024-11-05 16:05:20.060848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:39:58.736 [2024-11-05 16:05:20.060856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:39:58.736 [2024-11-05 16:05:20.060884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:39:58.736 [2024-11-05 16:05:20.060892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:39:58.736 [2024-11-05 16:05:20.060902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:39:58.736 [2024-11-05 16:05:20.060909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:39:58.736 [2024-11-05 16:05:20.060918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:39:58.736 [2024-11-05 16:05:20.060925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:39:58.736 [2024-11-05 16:05:20.060934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:39:58.736 [2024-11-05 16:05:20.060942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:39:58.736 [2024-11-05 16:05:20.060951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:39:58.736 [2024-11-05 16:05:20.060959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:39:58.736 [2024-11-05 16:05:20.060969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:39:58.736 [2024-11-05 16:05:20.060976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:39:58.736 [2024-11-05 16:05:20.060986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:39:58.736 [2024-11-05 16:05:20.060996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:39:58.736 [2024-11-05 16:05:20.061007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:39:58.736 [2024-11-05 16:05:20.061015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:39:58.736 [2024-11-05 16:05:20.061024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:39:58.736 [2024-11-05 16:05:20.061031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:39:58.736 [2024-11-05 16:05:20.061042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:39:58.736 [2024-11-05 16:05:20.061049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:39:58.736 [2024-11-05 16:05:20.061069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:39:58.736 [2024-11-05 16:05:20.061076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:39:58.736 [2024-11-05 16:05:20.061086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:39:58.736 [2024-11-05 16:05:20.061094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:39:58.736 [2024-11-05 16:05:20.061102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:39:58.736 [2024-11-05 16:05:20.061110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:39:58.736 [2024-11-05 16:05:20.061120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:39:58.736 [2024-11-05 16:05:20.061127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:39:58.736 [2024-11-05 16:05:20.061137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:39:58.736 [2024-11-05 16:05:20.061154] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:39:58.736 [2024-11-05 16:05:20.061165] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 00f76824-5a3a-4487-9ed4-1ffd3d9e229e 00:39:58.736 [2024-11-05 16:05:20.061173] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:39:58.736 [2024-11-05 16:05:20.061182] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:39:58.736 [2024-11-05 16:05:20.061189] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:39:58.736 [2024-11-05 16:05:20.061198] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:39:58.736 [2024-11-05 16:05:20.061207] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:39:58.736 [2024-11-05 16:05:20.061216] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:39:58.736 [2024-11-05 16:05:20.061223] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:39:58.736 [2024-11-05 16:05:20.061232] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:39:58.736 [2024-11-05 16:05:20.061238] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:39:58.736 [2024-11-05 16:05:20.061248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:58.736 [2024-11-05 16:05:20.061257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:39:58.736 [2024-11-05 16:05:20.061267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.064 ms 00:39:58.736 [2024-11-05 16:05:20.061274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:58.736 [2024-11-05 16:05:20.074404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:58.736 [2024-11-05 16:05:20.074435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:39:58.736 [2024-11-05 16:05:20.074453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.082 ms 00:39:58.736 [2024-11-05 16:05:20.074462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:58.736 [2024-11-05 16:05:20.074882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:58.736 [2024-11-05 16:05:20.074894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:39:58.736 [2024-11-05 16:05:20.074915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.358 ms 00:39:58.736 [2024-11-05 16:05:20.074922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:58.995 [2024-11-05 16:05:20.122098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:58.995 [2024-11-05 16:05:20.122135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:39:58.995 [2024-11-05 16:05:20.122148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:58.995 [2024-11-05 16:05:20.122157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:58.995 [2024-11-05 16:05:20.122281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:58.995 [2024-11-05 16:05:20.122292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:39:58.995 [2024-11-05 16:05:20.122312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:58.995 [2024-11-05 16:05:20.122320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:58.995 [2024-11-05 16:05:20.122393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:58.995 [2024-11-05 16:05:20.122403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:39:58.995 [2024-11-05 16:05:20.122417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:58.995 [2024-11-05 16:05:20.122425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:58.995 [2024-11-05 16:05:20.122463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:58.995 [2024-11-05 16:05:20.122470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:39:58.995 [2024-11-05 16:05:20.122480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:58.995 [2024-11-05 16:05:20.122488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:58.995 [2024-11-05 16:05:20.209799] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:58.995 [2024-11-05 16:05:20.209842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:39:58.995 [2024-11-05 16:05:20.209854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:58.995 [2024-11-05 16:05:20.209862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:58.995 [2024-11-05 16:05:20.276193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:58.995 [2024-11-05 16:05:20.276395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:39:58.995 [2024-11-05 16:05:20.276414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:58.995 [2024-11-05 16:05:20.276423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:58.995 [2024-11-05 16:05:20.276525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:58.995 [2024-11-05 16:05:20.276535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:39:58.995 [2024-11-05 16:05:20.276563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:58.995 [2024-11-05 16:05:20.276574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:58.995 [2024-11-05 16:05:20.276638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:58.995 [2024-11-05 16:05:20.276647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:39:58.995 [2024-11-05 16:05:20.276656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:58.995 [2024-11-05 16:05:20.276663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:58.995 [2024-11-05 16:05:20.276810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:58.995 [2024-11-05 16:05:20.276821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:39:58.995 [2024-11-05 16:05:20.276831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:58.995 [2024-11-05 16:05:20.276840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:58.995 [2024-11-05 16:05:20.276900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:58.995 [2024-11-05 16:05:20.276910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:39:58.995 [2024-11-05 16:05:20.276920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:58.995 [2024-11-05 16:05:20.276927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:58.995 [2024-11-05 16:05:20.276989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:58.995 [2024-11-05 16:05:20.276998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:39:58.995 [2024-11-05 16:05:20.277009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:58.995 [2024-11-05 16:05:20.277017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:58.995 [2024-11-05 16:05:20.277086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:58.995 [2024-11-05 16:05:20.277097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:39:58.995 [2024-11-05 16:05:20.277108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:58.995 [2024-11-05 16:05:20.277115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:39:58.995 [2024-11-05 16:05:20.277303] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 363.620 ms, result 0 00:39:58.995 true 00:39:58.995 16:05:20 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 73455 00:39:58.995 16:05:20 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 73455 ']' 00:39:58.995 16:05:20 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 73455 00:39:58.995 16:05:20 ftl.ftl_trim -- common/autotest_common.sh@957 -- # uname 00:39:58.995 16:05:20 ftl.ftl_trim -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:39:58.995 16:05:20 ftl.ftl_trim -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73455 00:39:58.995 killing process with pid 73455 00:39:58.995 16:05:20 ftl.ftl_trim -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:39:58.995 16:05:20 ftl.ftl_trim -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:39:58.995 16:05:20 ftl.ftl_trim -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73455' 00:39:58.995 16:05:20 ftl.ftl_trim -- common/autotest_common.sh@971 -- # kill 73455 00:39:58.995 16:05:20 ftl.ftl_trim -- common/autotest_common.sh@976 -- # wait 73455 00:40:05.577 16:05:26 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:40:05.839 65536+0 records in 00:40:05.839 65536+0 records out 00:40:05.839 268435456 bytes (268 MB, 256 MiB) copied, 1.06552 s, 252 MB/s 00:40:05.839 16:05:27 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:40:05.839 [2024-11-05 16:05:27.146629] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 
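The two commands traced just above are the write phase of the trim test (ftl/trim.sh in the xtrace prefixes): dd builds a 256 MiB random pattern as 65536 blocks of 4 KiB, matching the bdev's 4096-byte block_size, and spdk_dd replays that file onto ftl0 using the bdev config saved earlier with save_subsystem_config. A minimal standalone sketch of the same phase, assuming the repo-relative paths from the log (run from the spdk checkout; ftl.json is the saved {"subsystems": [...]} dump):

    # Generate the pattern the test writes: 65536 x 4 KiB = 256 MiB of random data.
    dd if=/dev/urandom of=test/ftl/random_pattern bs=4K count=65536
    # Replay the file through the FTL bdev named ftl0; --json points spdk_dd at
    # the bdev subsystem config captured before the previous target was killed.
    ./build/bin/spdk_dd --if=test/ftl/random_pattern --ob=ftl0 \
        --json=test/ftl/config/ftl.json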
00:40:05.839 [2024-11-05 16:05:27.146749] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73637 ] 00:40:06.100 [2024-11-05 16:05:27.302310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:06.100 [2024-11-05 16:05:27.416440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:06.674 [2024-11-05 16:05:27.738033] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:40:06.674 [2024-11-05 16:05:27.738097] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:40:06.674 [2024-11-05 16:05:27.898163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:06.674 [2024-11-05 16:05:27.898210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:40:06.674 [2024-11-05 16:05:27.898226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:40:06.674 [2024-11-05 16:05:27.898234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:06.674 [2024-11-05 16:05:27.901099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:06.674 [2024-11-05 16:05:27.901136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:40:06.674 [2024-11-05 16:05:27.901146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.846 ms 00:40:06.674 [2024-11-05 16:05:27.901153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:06.674 [2024-11-05 16:05:27.901242] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:40:06.674 [2024-11-05 16:05:27.901946] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:40:06.674 [2024-11-05 16:05:27.901967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:06.674 [2024-11-05 16:05:27.901976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:40:06.674 [2024-11-05 16:05:27.901985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.733 ms 00:40:06.674 [2024-11-05 16:05:27.901992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:06.674 [2024-11-05 16:05:27.903852] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:40:06.674 [2024-11-05 16:05:27.917540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:06.674 [2024-11-05 16:05:27.917582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:40:06.674 [2024-11-05 16:05:27.917594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.690 ms 00:40:06.674 [2024-11-05 16:05:27.917602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:06.674 [2024-11-05 16:05:27.917693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:06.674 [2024-11-05 16:05:27.917705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:40:06.674 [2024-11-05 16:05:27.917714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:40:06.674 [2024-11-05 16:05:27.917722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:06.674 [2024-11-05 16:05:27.924909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:40:06.674 [2024-11-05 16:05:27.925069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:40:06.674 [2024-11-05 16:05:27.925086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.118 ms 00:40:06.674 [2024-11-05 16:05:27.925094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:06.674 [2024-11-05 16:05:27.925189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:06.674 [2024-11-05 16:05:27.925199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:40:06.674 [2024-11-05 16:05:27.925209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:40:06.674 [2024-11-05 16:05:27.925216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:06.674 [2024-11-05 16:05:27.925244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:06.674 [2024-11-05 16:05:27.925255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:40:06.674 [2024-11-05 16:05:27.925263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:40:06.674 [2024-11-05 16:05:27.925270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:06.674 [2024-11-05 16:05:27.925293] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:40:06.674 [2024-11-05 16:05:27.929053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:06.674 [2024-11-05 16:05:27.929083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:40:06.674 [2024-11-05 16:05:27.929092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.768 ms 00:40:06.674 [2024-11-05 16:05:27.929100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:06.674 [2024-11-05 16:05:27.929159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:06.674 [2024-11-05 16:05:27.929169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:40:06.674 [2024-11-05 16:05:27.929179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:40:06.674 [2024-11-05 16:05:27.929186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:06.674 [2024-11-05 16:05:27.929204] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:40:06.674 [2024-11-05 16:05:27.929227] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:40:06.674 [2024-11-05 16:05:27.929264] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:40:06.674 [2024-11-05 16:05:27.929279] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:40:06.674 [2024-11-05 16:05:27.929385] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:40:06.674 [2024-11-05 16:05:27.929395] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:40:06.674 [2024-11-05 16:05:27.929406] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:40:06.674 [2024-11-05 16:05:27.929416] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:40:06.674 [2024-11-05 16:05:27.929448] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:40:06.674 [2024-11-05 16:05:27.929456] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:40:06.674 [2024-11-05 16:05:27.929463] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:40:06.674 [2024-11-05 16:05:27.929471] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:40:06.675 [2024-11-05 16:05:27.929479] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:40:06.675 [2024-11-05 16:05:27.929487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:06.675 [2024-11-05 16:05:27.929495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:40:06.675 [2024-11-05 16:05:27.929503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:40:06.675 [2024-11-05 16:05:27.929510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:06.675 [2024-11-05 16:05:27.929597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:06.675 [2024-11-05 16:05:27.929606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:40:06.675 [2024-11-05 16:05:27.929616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:40:06.675 [2024-11-05 16:05:27.929622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:06.675 [2024-11-05 16:05:27.929721] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:40:06.675 [2024-11-05 16:05:27.929731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:40:06.675 [2024-11-05 16:05:27.929756] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:40:06.675 [2024-11-05 16:05:27.929764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:06.675 [2024-11-05 16:05:27.929771] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:40:06.675 [2024-11-05 16:05:27.929778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:40:06.675 [2024-11-05 16:05:27.929784] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:40:06.675 [2024-11-05 16:05:27.929792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:40:06.675 [2024-11-05 16:05:27.929799] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:40:06.675 [2024-11-05 16:05:27.929806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:40:06.675 [2024-11-05 16:05:27.929813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:40:06.675 [2024-11-05 16:05:27.929820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:40:06.675 [2024-11-05 16:05:27.929827] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:40:06.675 [2024-11-05 16:05:27.929840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:40:06.675 [2024-11-05 16:05:27.929847] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:40:06.675 [2024-11-05 16:05:27.929854] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:06.675 [2024-11-05 16:05:27.929866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:40:06.675 [2024-11-05 16:05:27.929891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:40:06.675 [2024-11-05 16:05:27.929898] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:06.675 [2024-11-05 16:05:27.929905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:40:06.675 [2024-11-05 16:05:27.929913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:40:06.675 [2024-11-05 16:05:27.929920] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:06.675 [2024-11-05 16:05:27.929927] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:40:06.675 [2024-11-05 16:05:27.929933] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:40:06.675 [2024-11-05 16:05:27.929940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:06.675 [2024-11-05 16:05:27.929946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:40:06.675 [2024-11-05 16:05:27.929953] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:40:06.675 [2024-11-05 16:05:27.929960] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:06.675 [2024-11-05 16:05:27.929967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:40:06.675 [2024-11-05 16:05:27.929974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:40:06.675 [2024-11-05 16:05:27.929980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:06.675 [2024-11-05 16:05:27.929987] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:40:06.675 [2024-11-05 16:05:27.929993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:40:06.675 [2024-11-05 16:05:27.930000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:40:06.675 [2024-11-05 16:05:27.930006] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:40:06.675 [2024-11-05 16:05:27.930013] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:40:06.675 [2024-11-05 16:05:27.930019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:40:06.675 [2024-11-05 16:05:27.930026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:40:06.675 [2024-11-05 16:05:27.930032] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:40:06.675 [2024-11-05 16:05:27.930038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:06.675 [2024-11-05 16:05:27.930045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:40:06.675 [2024-11-05 16:05:27.930051] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:40:06.675 [2024-11-05 16:05:27.930057] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:06.675 [2024-11-05 16:05:27.930066] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:40:06.675 [2024-11-05 16:05:27.930075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:40:06.675 [2024-11-05 16:05:27.930082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:40:06.675 [2024-11-05 16:05:27.930091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:06.675 [2024-11-05 16:05:27.930098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:40:06.675 [2024-11-05 16:05:27.930106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:40:06.675 [2024-11-05 16:05:27.930113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:40:06.675 
[2024-11-05 16:05:27.930120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:40:06.675 [2024-11-05 16:05:27.930127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:40:06.675 [2024-11-05 16:05:27.930134] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:40:06.675 [2024-11-05 16:05:27.930143] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:40:06.675 [2024-11-05 16:05:27.930152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:40:06.675 [2024-11-05 16:05:27.930160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:40:06.675 [2024-11-05 16:05:27.930167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:40:06.675 [2024-11-05 16:05:27.930174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:40:06.675 [2024-11-05 16:05:27.930181] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:40:06.675 [2024-11-05 16:05:27.930189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:40:06.675 [2024-11-05 16:05:27.930196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:40:06.675 [2024-11-05 16:05:27.930203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:40:06.675 [2024-11-05 16:05:27.930209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:40:06.675 [2024-11-05 16:05:27.930216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:40:06.675 [2024-11-05 16:05:27.930224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:40:06.675 [2024-11-05 16:05:27.930231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:40:06.675 [2024-11-05 16:05:27.930238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:40:06.675 [2024-11-05 16:05:27.930245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:40:06.675 [2024-11-05 16:05:27.930252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:40:06.675 [2024-11-05 16:05:27.930260] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:40:06.675 [2024-11-05 16:05:27.930268] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:40:06.675 [2024-11-05 16:05:27.930276] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:40:06.675 [2024-11-05 16:05:27.930284] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:40:06.675 [2024-11-05 16:05:27.930290] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:40:06.675 [2024-11-05 16:05:27.930307] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:40:06.675 [2024-11-05 16:05:27.930317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:06.675 [2024-11-05 16:05:27.930325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:40:06.675 [2024-11-05 16:05:27.930336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.665 ms 00:40:06.675 [2024-11-05 16:05:27.930343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:06.675 [2024-11-05 16:05:27.960395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:06.675 [2024-11-05 16:05:27.960557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:40:06.675 [2024-11-05 16:05:27.960574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.001 ms 00:40:06.675 [2024-11-05 16:05:27.960584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:06.676 [2024-11-05 16:05:27.960705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:06.676 [2024-11-05 16:05:27.960719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:40:06.676 [2024-11-05 16:05:27.960728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:40:06.676 [2024-11-05 16:05:27.960754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:06.676 [2024-11-05 16:05:28.004781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:06.676 [2024-11-05 16:05:28.004822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:40:06.676 [2024-11-05 16:05:28.004835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.003 ms 00:40:06.676 [2024-11-05 16:05:28.004847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:06.676 [2024-11-05 16:05:28.004948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:06.676 [2024-11-05 16:05:28.004961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:40:06.676 [2024-11-05 16:05:28.004970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:40:06.676 [2024-11-05 16:05:28.004978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:06.676 [2024-11-05 16:05:28.005430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:06.676 [2024-11-05 16:05:28.005459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:40:06.676 [2024-11-05 16:05:28.005469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.429 ms 00:40:06.676 [2024-11-05 16:05:28.005482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:06.676 [2024-11-05 16:05:28.005624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:06.676 [2024-11-05 16:05:28.005634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:40:06.676 [2024-11-05 16:05:28.005643] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:40:06.676 [2024-11-05 16:05:28.005651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:06.676 [2024-11-05 16:05:28.021109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:06.676 [2024-11-05 16:05:28.021142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:40:06.676 [2024-11-05 16:05:28.021153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.434 ms 00:40:06.676 [2024-11-05 16:05:28.021162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:06.676 [2024-11-05 16:05:28.034816] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:40:06.676 [2024-11-05 16:05:28.034851] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:40:06.676 [2024-11-05 16:05:28.034863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:06.676 [2024-11-05 16:05:28.034871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:40:06.676 [2024-11-05 16:05:28.034880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.593 ms 00:40:06.676 [2024-11-05 16:05:28.034887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:06.937 [2024-11-05 16:05:28.059379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:06.937 [2024-11-05 16:05:28.059414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:40:06.937 [2024-11-05 16:05:28.059432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.420 ms 00:40:06.937 [2024-11-05 16:05:28.059440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:06.937 [2024-11-05 16:05:28.071595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:06.937 [2024-11-05 16:05:28.071627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:40:06.937 [2024-11-05 16:05:28.071636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.083 ms 00:40:06.937 [2024-11-05 16:05:28.071643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:06.937 [2024-11-05 16:05:28.083045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:06.937 [2024-11-05 16:05:28.083076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:40:06.937 [2024-11-05 16:05:28.083086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.340 ms 00:40:06.938 [2024-11-05 16:05:28.083093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:06.938 [2024-11-05 16:05:28.083697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:06.938 [2024-11-05 16:05:28.083716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:40:06.938 [2024-11-05 16:05:28.083726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.515 ms 00:40:06.938 [2024-11-05 16:05:28.083753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:06.938 [2024-11-05 16:05:28.144062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:06.938 [2024-11-05 16:05:28.144103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:40:06.938 [2024-11-05 16:05:28.144115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 60.284 ms 00:40:06.938 [2024-11-05 16:05:28.144123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:06.938 [2024-11-05 16:05:28.155011] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:40:06.938 [2024-11-05 16:05:28.172072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:06.938 [2024-11-05 16:05:28.172107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:40:06.938 [2024-11-05 16:05:28.172118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.876 ms 00:40:06.938 [2024-11-05 16:05:28.172126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:06.938 [2024-11-05 16:05:28.172205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:06.938 [2024-11-05 16:05:28.172219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:40:06.938 [2024-11-05 16:05:28.172228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:40:06.938 [2024-11-05 16:05:28.172236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:06.938 [2024-11-05 16:05:28.172290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:06.938 [2024-11-05 16:05:28.172300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:40:06.938 [2024-11-05 16:05:28.172308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:40:06.938 [2024-11-05 16:05:28.172316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:06.938 [2024-11-05 16:05:28.172343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:06.938 [2024-11-05 16:05:28.172351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:40:06.938 [2024-11-05 16:05:28.172362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:40:06.938 [2024-11-05 16:05:28.172370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:06.938 [2024-11-05 16:05:28.172407] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:40:06.938 [2024-11-05 16:05:28.172417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:06.938 [2024-11-05 16:05:28.172425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:40:06.938 [2024-11-05 16:05:28.172433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:40:06.938 [2024-11-05 16:05:28.172441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:06.938 [2024-11-05 16:05:28.195991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:06.938 [2024-11-05 16:05:28.196029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:40:06.938 [2024-11-05 16:05:28.196040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.530 ms 00:40:06.938 [2024-11-05 16:05:28.196048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:06.938 [2024-11-05 16:05:28.196142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:06.938 [2024-11-05 16:05:28.196154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:40:06.938 [2024-11-05 16:05:28.196163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:40:06.938 [2024-11-05 16:05:28.196171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
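Every management step in the FTL startup trace above is emitted by mngt/ftl_mngt.c as an Action / name / duration / status quadruple, and the per-step durations roll up into the total printed in the "Management process finished, name 'FTL startup'" entry that follows. A minimal shell sketch for pulling those timings out of a saved console log (the nvme-vg-autotest.log filename is hypothetical, and this sums every trace_step duration in the file, startup and shutdown alike):

    # extract each "duration: N ms" field and total them up
    grep -oE 'duration: [0-9.]+ ms' nvme-vg-autotest.log \
        | awk '{sum += $2} END {printf "steps: %d, total: %.3f ms\n", NR, sum}'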
00:40:06.938 [2024-11-05 16:05:28.197168] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:40:06.938 [2024-11-05 16:05:28.200151] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 298.695 ms, result 0 00:40:06.938 [2024-11-05 16:05:28.201318] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:40:06.938 [2024-11-05 16:05:28.214172] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:40:07.883  [2024-11-05T16:05:30.625Z] Copying: 20/256 [MB] (20 MBps) [2024-11-05T16:05:31.567Z] Copying: 41/256 [MB] (20 MBps) [2024-11-05T16:05:32.507Z] Copying: 59/256 [MB] (17 MBps) [2024-11-05T16:05:33.443Z] Copying: 74/256 [MB] (14 MBps) [2024-11-05T16:05:34.383Z] Copying: 96/256 [MB] (22 MBps) [2024-11-05T16:05:35.327Z] Copying: 117/256 [MB] (20 MBps) [2024-11-05T16:05:36.273Z] Copying: 134/256 [MB] (17 MBps) [2024-11-05T16:05:37.219Z] Copying: 156/256 [MB] (21 MBps) [2024-11-05T16:05:38.607Z] Copying: 167/256 [MB] (11 MBps) [2024-11-05T16:05:39.553Z] Copying: 181476/262144 [kB] (10172 kBps) [2024-11-05T16:05:40.497Z] Copying: 187/256 [MB] (10 MBps) [2024-11-05T16:05:41.493Z] Copying: 202024/262144 [kB] (9980 kBps) [2024-11-05T16:05:42.433Z] Copying: 212040/262144 [kB] (10016 kBps) [2024-11-05T16:05:43.375Z] Copying: 230/256 [MB] (23 MBps) [2024-11-05T16:05:43.945Z] Copying: 249/256 [MB] (18 MBps) [2024-11-05T16:05:43.946Z] Copying: 256/256 [MB] (average 16 MBps)[2024-11-05 16:05:43.853445] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:40:22.584 [2024-11-05 16:05:43.863753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:22.584 [2024-11-05 16:05:43.863959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:40:22.584 [2024-11-05 16:05:43.863985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:40:22.584 [2024-11-05 16:05:43.863996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:22.584 [2024-11-05 16:05:43.864028] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:40:22.584 [2024-11-05 16:05:43.867049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:22.584 [2024-11-05 16:05:43.867223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:40:22.584 [2024-11-05 16:05:43.867243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.006 ms 00:40:22.584 [2024-11-05 16:05:43.867251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:22.584 [2024-11-05 16:05:43.870415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:22.584 [2024-11-05 16:05:43.870580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:40:22.584 [2024-11-05 16:05:43.870600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.128 ms 00:40:22.584 [2024-11-05 16:05:43.870609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:22.584 [2024-11-05 16:05:43.880411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:22.584 [2024-11-05 16:05:43.880460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:40:22.584 [2024-11-05 16:05:43.880480] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 9.778 ms 00:40:22.584 [2024-11-05 16:05:43.880488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:22.584 [2024-11-05 16:05:43.887461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:22.584 [2024-11-05 16:05:43.887502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:40:22.584 [2024-11-05 16:05:43.887514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.925 ms 00:40:22.584 [2024-11-05 16:05:43.887523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:22.584 [2024-11-05 16:05:43.913143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:22.584 [2024-11-05 16:05:43.913192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:40:22.584 [2024-11-05 16:05:43.913205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.554 ms 00:40:22.584 [2024-11-05 16:05:43.913213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:22.584 [2024-11-05 16:05:43.930016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:22.584 [2024-11-05 16:05:43.930063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:40:22.584 [2024-11-05 16:05:43.930084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.732 ms 00:40:22.584 [2024-11-05 16:05:43.930096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:22.584 [2024-11-05 16:05:43.930249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:22.584 [2024-11-05 16:05:43.930261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:40:22.584 [2024-11-05 16:05:43.930270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:40:22.584 [2024-11-05 16:05:43.930278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:22.846 [2024-11-05 16:05:43.956784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:22.846 [2024-11-05 16:05:43.956829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:40:22.846 [2024-11-05 16:05:43.956841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.488 ms 00:40:22.846 [2024-11-05 16:05:43.956849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:22.846 [2024-11-05 16:05:43.982389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:22.846 [2024-11-05 16:05:43.982433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:40:22.846 [2024-11-05 16:05:43.982445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.474 ms 00:40:22.846 [2024-11-05 16:05:43.982453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:22.846 [2024-11-05 16:05:44.007183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:22.846 [2024-11-05 16:05:44.007231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:40:22.846 [2024-11-05 16:05:44.007244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.680 ms 00:40:22.846 [2024-11-05 16:05:44.007251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:22.846 [2024-11-05 16:05:44.031867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:22.846 [2024-11-05 16:05:44.031911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 
00:40:22.846 [2024-11-05 16:05:44.031921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.533 ms 00:40:22.846 [2024-11-05 16:05:44.031929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:22.846 [2024-11-05 16:05:44.031977] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:40:22.846 [2024-11-05 16:05:44.032000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:40:22.846 [2024-11-05 16:05:44.032009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:40:22.846 [2024-11-05 16:05:44.032017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:40:22.846 [2024-11-05 16:05:44.032025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:40:22.846 [2024-11-05 16:05:44.032033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:40:22.846 [2024-11-05 16:05:44.032040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:40:22.846 [2024-11-05 16:05:44.032048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:40:22.846 [2024-11-05 16:05:44.032055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:40:22.846 [2024-11-05 16:05:44.032063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:40:22.846 [2024-11-05 16:05:44.032070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:40:22.846 [2024-11-05 16:05:44.032078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:40:22.846 [2024-11-05 16:05:44.032086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:40:22.846 [2024-11-05 16:05:44.032093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:40:22.846 [2024-11-05 16:05:44.032100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:40:22.846 [2024-11-05 16:05:44.032108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:40:22.846 [2024-11-05 16:05:44.032115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:40:22.846 [2024-11-05 16:05:44.032122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:40:22.846 [2024-11-05 16:05:44.032129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:40:22.846 [2024-11-05 16:05:44.032136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:40:22.846 [2024-11-05 16:05:44.032143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:40:22.846 [2024-11-05 16:05:44.032150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:40:22.846 [2024-11-05 16:05:44.032157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:40:22.846 [2024-11-05 16:05:44.032164] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:40:22.846 [2024-11-05 16:05:44.032171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:40:22.846 [2024-11-05 16:05:44.032179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:40:22.846 [2024-11-05 16:05:44.032186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 
16:05:44.032357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 
00:40:22.847 [2024-11-05 16:05:44.032544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 
wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:40:22.847 [2024-11-05 16:05:44.032789] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:40:22.847 [2024-11-05 16:05:44.032797] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 00f76824-5a3a-4487-9ed4-1ffd3d9e229e 00:40:22.847 [2024-11-05 16:05:44.032805] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:40:22.847 [2024-11-05 16:05:44.032813] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:40:22.847 [2024-11-05 16:05:44.032821] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:40:22.847 [2024-11-05 16:05:44.032829] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:40:22.847 [2024-11-05 16:05:44.032837] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:40:22.847 [2024-11-05 16:05:44.032845] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:40:22.847 [2024-11-05 16:05:44.032852] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:40:22.847 [2024-11-05 16:05:44.032858] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:40:22.847 [2024-11-05 16:05:44.032865] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:40:22.847 [2024-11-05 16:05:44.032880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:22.847 [2024-11-05 16:05:44.032898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:40:22.847 [2024-11-05 16:05:44.032910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.904 ms 00:40:22.847 [2024-11-05 16:05:44.032918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:22.847 [2024-11-05 16:05:44.046577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:22.848 [2024-11-05 16:05:44.046810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:40:22.848 [2024-11-05 16:05:44.046829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.624 ms 00:40:22.848 [2024-11-05 16:05:44.046838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:22.848 [2024-11-05 16:05:44.047244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:22.848 [2024-11-05 16:05:44.047263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:40:22.848 [2024-11-05 16:05:44.047273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.366 ms 00:40:22.848 [2024-11-05 16:05:44.047280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:22.848 [2024-11-05 16:05:44.086196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:22.848 [2024-11-05 16:05:44.086381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:40:22.848 [2024-11-05 16:05:44.086401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:22.848 [2024-11-05 16:05:44.086410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:22.848 [2024-11-05 
16:05:44.086494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:22.848 [2024-11-05 16:05:44.086507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:40:22.848 [2024-11-05 16:05:44.086516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:22.848 [2024-11-05 16:05:44.086523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:22.848 [2024-11-05 16:05:44.086576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:22.848 [2024-11-05 16:05:44.086586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:40:22.848 [2024-11-05 16:05:44.086594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:22.848 [2024-11-05 16:05:44.086602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:22.848 [2024-11-05 16:05:44.086619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:22.848 [2024-11-05 16:05:44.086628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:40:22.848 [2024-11-05 16:05:44.086639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:22.848 [2024-11-05 16:05:44.086647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:22.848 [2024-11-05 16:05:44.170310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:22.848 [2024-11-05 16:05:44.170373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:40:22.848 [2024-11-05 16:05:44.170388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:22.848 [2024-11-05 16:05:44.170397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:23.108 [2024-11-05 16:05:44.240592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:23.109 [2024-11-05 16:05:44.240655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:40:23.109 [2024-11-05 16:05:44.240675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:23.109 [2024-11-05 16:05:44.240684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:23.109 [2024-11-05 16:05:44.240774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:23.109 [2024-11-05 16:05:44.240786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:40:23.109 [2024-11-05 16:05:44.240796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:23.109 [2024-11-05 16:05:44.240805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:23.109 [2024-11-05 16:05:44.240838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:23.109 [2024-11-05 16:05:44.240847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:40:23.109 [2024-11-05 16:05:44.240855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:23.109 [2024-11-05 16:05:44.240867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:23.109 [2024-11-05 16:05:44.240990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:23.109 [2024-11-05 16:05:44.241003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:40:23.109 [2024-11-05 16:05:44.241012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:23.109 [2024-11-05 16:05:44.241020] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:23.109 [2024-11-05 16:05:44.241053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:23.109 [2024-11-05 16:05:44.241065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:40:23.109 [2024-11-05 16:05:44.241074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:23.109 [2024-11-05 16:05:44.241082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:23.109 [2024-11-05 16:05:44.241129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:23.109 [2024-11-05 16:05:44.241140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:40:23.109 [2024-11-05 16:05:44.241150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:23.109 [2024-11-05 16:05:44.241158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:23.109 [2024-11-05 16:05:44.241207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:23.109 [2024-11-05 16:05:44.241219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:40:23.109 [2024-11-05 16:05:44.241227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:23.109 [2024-11-05 16:05:44.241239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:23.109 [2024-11-05 16:05:44.241406] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 377.650 ms, result 0 00:40:24.049 00:40:24.049 00:40:24.049 16:05:45 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=73828 00:40:24.049 16:05:45 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 73828 00:40:24.049 16:05:45 ftl.ftl_trim -- common/autotest_common.sh@833 -- # '[' -z 73828 ']' 00:40:24.049 16:05:45 ftl.ftl_trim -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:24.049 16:05:45 ftl.ftl_trim -- common/autotest_common.sh@838 -- # local max_retries=100 00:40:24.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:24.049 16:05:45 ftl.ftl_trim -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:24.049 16:05:45 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:40:24.049 16:05:45 ftl.ftl_trim -- common/autotest_common.sh@842 -- # xtrace_disable 00:40:24.049 16:05:45 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:40:24.049 [2024-11-05 16:05:45.338053] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 
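After the 'FTL shutdown' management process finishes, trim.sh launches a fresh spdk_tgt with the ftl_init log flag and waitforlisten blocks until the RPC socket at /var/tmp/spdk.sock answers. A simplified sketch of that wait loop, assuming the paths shown in the trace (the real waitforlisten in common/autotest_common.sh also enforces the max_retries=100 limit seen above):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init &
    svcpid=$!
    # poll the RPC socket until the target is ready to serve requests
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        kill -0 "$svcpid" 2> /dev/null || exit 1  # give up if the target died
        sleep 0.1
    done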
00:40:24.049 [2024-11-05 16:05:45.338219] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73828 ] 00:40:24.309 [2024-11-05 16:05:45.505896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:24.309 [2024-11-05 16:05:45.627005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:25.249 16:05:46 ftl.ftl_trim -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:40:25.249 16:05:46 ftl.ftl_trim -- common/autotest_common.sh@866 -- # return 0 00:40:25.249 16:05:46 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:40:25.249 [2024-11-05 16:05:46.532042] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:40:25.249 [2024-11-05 16:05:46.532119] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:40:25.509 [2024-11-05 16:05:46.711453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:25.509 [2024-11-05 16:05:46.711517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:40:25.510 [2024-11-05 16:05:46.711535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:40:25.510 [2024-11-05 16:05:46.711544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:25.510 [2024-11-05 16:05:46.717975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:25.510 [2024-11-05 16:05:46.718026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:40:25.510 [2024-11-05 16:05:46.718040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.407 ms 00:40:25.510 [2024-11-05 16:05:46.718049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:25.510 [2024-11-05 16:05:46.718189] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:40:25.510 [2024-11-05 16:05:46.719094] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:40:25.510 [2024-11-05 16:05:46.719192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:25.510 [2024-11-05 16:05:46.719202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:40:25.510 [2024-11-05 16:05:46.719214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.016 ms 00:40:25.510 [2024-11-05 16:05:46.719222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:25.510 [2024-11-05 16:05:46.721025] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:40:25.510 [2024-11-05 16:05:46.735553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:25.510 [2024-11-05 16:05:46.735610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:40:25.510 [2024-11-05 16:05:46.735625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.536 ms 00:40:25.510 [2024-11-05 16:05:46.735636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:25.510 [2024-11-05 16:05:46.735766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:25.510 [2024-11-05 16:05:46.735781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:40:25.510 [2024-11-05 16:05:46.735791] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:40:25.510 [2024-11-05 16:05:46.735801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:25.510 [2024-11-05 16:05:46.745108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:25.510 [2024-11-05 16:05:46.745165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:40:25.510 [2024-11-05 16:05:46.745175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.249 ms 00:40:25.510 [2024-11-05 16:05:46.745186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:25.510 [2024-11-05 16:05:46.745307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:25.510 [2024-11-05 16:05:46.745320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:40:25.510 [2024-11-05 16:05:46.745329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:40:25.510 [2024-11-05 16:05:46.745339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:25.510 [2024-11-05 16:05:46.745374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:25.510 [2024-11-05 16:05:46.745384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:40:25.510 [2024-11-05 16:05:46.745392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:40:25.510 [2024-11-05 16:05:46.745402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:25.510 [2024-11-05 16:05:46.745426] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:40:25.510 [2024-11-05 16:05:46.749629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:25.510 [2024-11-05 16:05:46.749840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:40:25.510 [2024-11-05 16:05:46.749866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.207 ms 00:40:25.510 [2024-11-05 16:05:46.749875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:25.510 [2024-11-05 16:05:46.749964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:25.510 [2024-11-05 16:05:46.749974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:40:25.510 [2024-11-05 16:05:46.749985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:40:25.510 [2024-11-05 16:05:46.749996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:25.510 [2024-11-05 16:05:46.750021] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:40:25.510 [2024-11-05 16:05:46.750045] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:40:25.510 [2024-11-05 16:05:46.750091] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:40:25.510 [2024-11-05 16:05:46.750107] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:40:25.510 [2024-11-05 16:05:46.750218] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:40:25.510 [2024-11-05 16:05:46.750229] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:40:25.510 [2024-11-05 16:05:46.750245] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:40:25.510 [2024-11-05 16:05:46.750258] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:40:25.510 [2024-11-05 16:05:46.750269] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:40:25.510 [2024-11-05 16:05:46.750279] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:40:25.510 [2024-11-05 16:05:46.750336] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:40:25.510 [2024-11-05 16:05:46.750344] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:40:25.510 [2024-11-05 16:05:46.750358] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:40:25.510 [2024-11-05 16:05:46.750367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:25.510 [2024-11-05 16:05:46.750377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:40:25.510 [2024-11-05 16:05:46.750385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.351 ms 00:40:25.510 [2024-11-05 16:05:46.750395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:25.510 [2024-11-05 16:05:46.750487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:25.510 [2024-11-05 16:05:46.750499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:40:25.510 [2024-11-05 16:05:46.750506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:40:25.510 [2024-11-05 16:05:46.750515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:25.510 [2024-11-05 16:05:46.750620] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:40:25.510 [2024-11-05 16:05:46.750641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:40:25.510 [2024-11-05 16:05:46.750650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:40:25.510 [2024-11-05 16:05:46.750660] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:25.510 [2024-11-05 16:05:46.750667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:40:25.510 [2024-11-05 16:05:46.750676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:40:25.510 [2024-11-05 16:05:46.750683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:40:25.510 [2024-11-05 16:05:46.750695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:40:25.510 [2024-11-05 16:05:46.750703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:40:25.510 [2024-11-05 16:05:46.750712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:40:25.510 [2024-11-05 16:05:46.750720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:40:25.510 [2024-11-05 16:05:46.750729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:40:25.510 [2024-11-05 16:05:46.750757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:40:25.510 [2024-11-05 16:05:46.750766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:40:25.510 [2024-11-05 16:05:46.750775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:40:25.510 [2024-11-05 16:05:46.750785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:25.510 
[2024-11-05 16:05:46.750792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:40:25.510 [2024-11-05 16:05:46.750801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:40:25.510 [2024-11-05 16:05:46.750809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:25.510 [2024-11-05 16:05:46.750818] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:40:25.510 [2024-11-05 16:05:46.750832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:40:25.510 [2024-11-05 16:05:46.750840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:25.510 [2024-11-05 16:05:46.750847] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:40:25.510 [2024-11-05 16:05:46.750858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:40:25.510 [2024-11-05 16:05:46.750865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:25.510 [2024-11-05 16:05:46.750874] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:40:25.510 [2024-11-05 16:05:46.750881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:40:25.510 [2024-11-05 16:05:46.750890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:25.510 [2024-11-05 16:05:46.750904] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:40:25.511 [2024-11-05 16:05:46.750913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:40:25.511 [2024-11-05 16:05:46.750920] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:25.511 [2024-11-05 16:05:46.750931] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:40:25.511 [2024-11-05 16:05:46.750941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:40:25.511 [2024-11-05 16:05:46.750950] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:40:25.511 [2024-11-05 16:05:46.750958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:40:25.511 [2024-11-05 16:05:46.750966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:40:25.511 [2024-11-05 16:05:46.750972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:40:25.511 [2024-11-05 16:05:46.750981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:40:25.511 [2024-11-05 16:05:46.750988] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:40:25.511 [2024-11-05 16:05:46.750998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:25.511 [2024-11-05 16:05:46.751006] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:40:25.511 [2024-11-05 16:05:46.751014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:40:25.511 [2024-11-05 16:05:46.751021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:25.511 [2024-11-05 16:05:46.751029] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:40:25.511 [2024-11-05 16:05:46.751037] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:40:25.511 [2024-11-05 16:05:46.751049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:40:25.511 [2024-11-05 16:05:46.751059] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:25.511 [2024-11-05 16:05:46.751069] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:40:25.511 [2024-11-05 16:05:46.751077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:40:25.511 [2024-11-05 16:05:46.751085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:40:25.511 [2024-11-05 16:05:46.751092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:40:25.511 [2024-11-05 16:05:46.751101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:40:25.511 [2024-11-05 16:05:46.751108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:40:25.511 [2024-11-05 16:05:46.751119] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:40:25.511 [2024-11-05 16:05:46.751129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:40:25.511 [2024-11-05 16:05:46.751143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:40:25.511 [2024-11-05 16:05:46.751150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:40:25.511 [2024-11-05 16:05:46.751159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:40:25.511 [2024-11-05 16:05:46.751167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:40:25.511 [2024-11-05 16:05:46.751176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:40:25.511 [2024-11-05 16:05:46.751183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:40:25.511 [2024-11-05 16:05:46.751193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:40:25.511 [2024-11-05 16:05:46.751200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:40:25.511 [2024-11-05 16:05:46.751208] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:40:25.511 [2024-11-05 16:05:46.751217] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:40:25.511 [2024-11-05 16:05:46.751225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:40:25.511 [2024-11-05 16:05:46.751233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:40:25.511 [2024-11-05 16:05:46.751242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:40:25.511 [2024-11-05 16:05:46.751250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:40:25.511 [2024-11-05 16:05:46.751261] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:40:25.511 [2024-11-05 
16:05:46.751269] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:40:25.511 [2024-11-05 16:05:46.751282] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:40:25.511 [2024-11-05 16:05:46.751290] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:40:25.511 [2024-11-05 16:05:46.751307] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:40:25.511 [2024-11-05 16:05:46.751314] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:40:25.511 [2024-11-05 16:05:46.751323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:25.511 [2024-11-05 16:05:46.751331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:40:25.511 [2024-11-05 16:05:46.751342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.769 ms 00:40:25.511 [2024-11-05 16:05:46.751350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:25.511 [2024-11-05 16:05:46.784482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:25.511 [2024-11-05 16:05:46.784538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:40:25.511 [2024-11-05 16:05:46.784555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.066 ms 00:40:25.511 [2024-11-05 16:05:46.784565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:25.511 [2024-11-05 16:05:46.784707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:25.511 [2024-11-05 16:05:46.784719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:40:25.511 [2024-11-05 16:05:46.784730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:40:25.511 [2024-11-05 16:05:46.784770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:25.511 [2024-11-05 16:05:46.820163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:25.511 [2024-11-05 16:05:46.820355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:40:25.511 [2024-11-05 16:05:46.820384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.363 ms 00:40:25.511 [2024-11-05 16:05:46.820393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:25.511 [2024-11-05 16:05:46.820488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:25.511 [2024-11-05 16:05:46.820498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:40:25.511 [2024-11-05 16:05:46.820510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:40:25.511 [2024-11-05 16:05:46.820518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:25.511 [2024-11-05 16:05:46.821078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:25.511 [2024-11-05 16:05:46.821099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:40:25.511 [2024-11-05 16:05:46.821115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.532 ms 00:40:25.511 [2024-11-05 16:05:46.821123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:40:25.511 [2024-11-05 16:05:46.821277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:25.511 [2024-11-05 16:05:46.821286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:40:25.511 [2024-11-05 16:05:46.821298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:40:25.511 [2024-11-05 16:05:46.821305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:25.511 [2024-11-05 16:05:46.839350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:25.511 [2024-11-05 16:05:46.839517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:40:25.511 [2024-11-05 16:05:46.839538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.018 ms 00:40:25.511 [2024-11-05 16:05:46.839546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:25.511 [2024-11-05 16:05:46.853868] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:40:25.511 [2024-11-05 16:05:46.854046] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:40:25.511 [2024-11-05 16:05:46.854069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:25.511 [2024-11-05 16:05:46.854078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:40:25.511 [2024-11-05 16:05:46.854090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.401 ms 00:40:25.511 [2024-11-05 16:05:46.854098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:25.772 [2024-11-05 16:05:46.880767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:25.772 [2024-11-05 16:05:46.880955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:40:25.772 [2024-11-05 16:05:46.880983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.526 ms 00:40:25.772 [2024-11-05 16:05:46.880993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:25.772 [2024-11-05 16:05:46.894381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:25.772 [2024-11-05 16:05:46.894431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:40:25.772 [2024-11-05 16:05:46.894449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.285 ms 00:40:25.772 [2024-11-05 16:05:46.894457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:25.772 [2024-11-05 16:05:46.907349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:25.772 [2024-11-05 16:05:46.907393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:40:25.772 [2024-11-05 16:05:46.907409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.792 ms 00:40:25.772 [2024-11-05 16:05:46.907417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:25.772 [2024-11-05 16:05:46.908122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:25.772 [2024-11-05 16:05:46.908148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:40:25.772 [2024-11-05 16:05:46.908161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.580 ms 00:40:25.772 [2024-11-05 16:05:46.908170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:25.772 [2024-11-05 
16:05:46.998081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:25.772 [2024-11-05 16:05:46.998365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:40:25.772 [2024-11-05 16:05:46.998398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.880 ms 00:40:25.772 [2024-11-05 16:05:46.998409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:25.772 [2024-11-05 16:05:47.010014] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:40:25.772 [2024-11-05 16:05:47.029624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:25.772 [2024-11-05 16:05:47.029851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:40:25.772 [2024-11-05 16:05:47.029876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.117 ms 00:40:25.772 [2024-11-05 16:05:47.029887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:25.772 [2024-11-05 16:05:47.029984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:25.772 [2024-11-05 16:05:47.029999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:40:25.772 [2024-11-05 16:05:47.030009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:40:25.772 [2024-11-05 16:05:47.030019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:25.772 [2024-11-05 16:05:47.030078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:25.772 [2024-11-05 16:05:47.030090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:40:25.772 [2024-11-05 16:05:47.030099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:40:25.772 [2024-11-05 16:05:47.030109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:25.772 [2024-11-05 16:05:47.030138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:25.772 [2024-11-05 16:05:47.030149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:40:25.772 [2024-11-05 16:05:47.030158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:40:25.772 [2024-11-05 16:05:47.030170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:25.772 [2024-11-05 16:05:47.030205] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:40:25.772 [2024-11-05 16:05:47.030221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:25.772 [2024-11-05 16:05:47.030229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:40:25.772 [2024-11-05 16:05:47.030243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:40:25.772 [2024-11-05 16:05:47.030250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:25.772 [2024-11-05 16:05:47.056123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:25.772 [2024-11-05 16:05:47.056173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:40:25.772 [2024-11-05 16:05:47.056190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.840 ms 00:40:25.772 [2024-11-05 16:05:47.056199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:25.772 [2024-11-05 16:05:47.056323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:25.772 [2024-11-05 16:05:47.056335] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:40:25.772 [2024-11-05 16:05:47.056347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:40:25.772 [2024-11-05 16:05:47.056358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:25.772 [2024-11-05 16:05:47.057481] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:40:25.772 [2024-11-05 16:05:47.061058] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 345.700 ms, result 0 00:40:25.772 [2024-11-05 16:05:47.062938] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:40:25.772 Some configs were skipped because the RPC state that can call them passed over. 00:40:25.772 16:05:47 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:40:26.032 [2024-11-05 16:05:47.307839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:26.032 [2024-11-05 16:05:47.308049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:40:26.032 [2024-11-05 16:05:47.308119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.282 ms 00:40:26.032 [2024-11-05 16:05:47.308147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:26.032 [2024-11-05 16:05:47.308205] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.650 ms, result 0 00:40:26.032 true 00:40:26.032 16:05:47 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:40:26.292 [2024-11-05 16:05:47.515539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:26.292 [2024-11-05 16:05:47.515705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:40:26.292 [2024-11-05 16:05:47.515793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.711 ms 00:40:26.292 [2024-11-05 16:05:47.515818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:26.292 [2024-11-05 16:05:47.515878] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.051 ms, result 0 00:40:26.292 true 00:40:26.292 16:05:47 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 73828 00:40:26.292 16:05:47 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 73828 ']' 00:40:26.292 16:05:47 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 73828 00:40:26.292 16:05:47 ftl.ftl_trim -- common/autotest_common.sh@957 -- # uname 00:40:26.292 16:05:47 ftl.ftl_trim -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:40:26.292 16:05:47 ftl.ftl_trim -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73828 00:40:26.292 killing process with pid 73828 00:40:26.292 16:05:47 ftl.ftl_trim -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:40:26.292 16:05:47 ftl.ftl_trim -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:40:26.292 16:05:47 ftl.ftl_trim -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73828' 00:40:26.292 16:05:47 ftl.ftl_trim -- common/autotest_common.sh@971 -- # kill 73828 00:40:26.292 16:05:47 ftl.ftl_trim -- common/autotest_common.sh@976 -- # wait 73828 00:40:27.230 [2024-11-05 16:05:48.246027] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:27.230 [2024-11-05 16:05:48.246074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:40:27.230 [2024-11-05 16:05:48.246084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:40:27.230 [2024-11-05 16:05:48.246092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:27.230 [2024-11-05 16:05:48.246110] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:40:27.230 [2024-11-05 16:05:48.248182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:27.230 [2024-11-05 16:05:48.248207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:40:27.230 [2024-11-05 16:05:48.248219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.059 ms 00:40:27.230 [2024-11-05 16:05:48.248226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:27.231 [2024-11-05 16:05:48.248442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:27.231 [2024-11-05 16:05:48.248453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:40:27.231 [2024-11-05 16:05:48.248462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.197 ms 00:40:27.231 [2024-11-05 16:05:48.248467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:27.231 [2024-11-05 16:05:48.251663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:27.231 [2024-11-05 16:05:48.251686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:40:27.231 [2024-11-05 16:05:48.251697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.179 ms 00:40:27.231 [2024-11-05 16:05:48.251703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:27.231 [2024-11-05 16:05:48.256910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:27.231 [2024-11-05 16:05:48.257026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:40:27.231 [2024-11-05 16:05:48.257041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.178 ms 00:40:27.231 [2024-11-05 16:05:48.257047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:27.231 [2024-11-05 16:05:48.264569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:27.231 [2024-11-05 16:05:48.264658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:40:27.231 [2024-11-05 16:05:48.264710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.475 ms 00:40:27.231 [2024-11-05 16:05:48.264732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:27.231 [2024-11-05 16:05:48.271064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:27.231 [2024-11-05 16:05:48.271157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:40:27.231 [2024-11-05 16:05:48.271209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.286 ms 00:40:27.231 [2024-11-05 16:05:48.271227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:27.231 [2024-11-05 16:05:48.271339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:27.231 [2024-11-05 16:05:48.271654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:40:27.231 [2024-11-05 16:05:48.271742] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:40:27.231 [2024-11-05 16:05:48.271798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:27.231 [2024-11-05 16:05:48.279650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:27.231 [2024-11-05 16:05:48.279745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:40:27.231 [2024-11-05 16:05:48.279790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.810 ms 00:40:27.231 [2024-11-05 16:05:48.279807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:27.231 [2024-11-05 16:05:48.287605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:27.231 [2024-11-05 16:05:48.287688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:40:27.231 [2024-11-05 16:05:48.287731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.760 ms 00:40:27.231 [2024-11-05 16:05:48.287757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:27.231 [2024-11-05 16:05:48.294677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:27.231 [2024-11-05 16:05:48.294771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:40:27.231 [2024-11-05 16:05:48.294819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.884 ms 00:40:27.231 [2024-11-05 16:05:48.294859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:27.231 [2024-11-05 16:05:48.301784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:27.231 [2024-11-05 16:05:48.301867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:40:27.231 [2024-11-05 16:05:48.301907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.821 ms 00:40:27.231 [2024-11-05 16:05:48.301924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:27.231 [2024-11-05 16:05:48.301958] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:40:27.231 [2024-11-05 16:05:48.301979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:40:27.231 [2024-11-05 16:05:48.302074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:40:27.231 [2024-11-05 16:05:48.302097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:40:27.231 [2024-11-05 16:05:48.302119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:40:27.231 [2024-11-05 16:05:48.302141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:40:27.231 [2024-11-05 16:05:48.302166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:40:27.231 [2024-11-05 16:05:48.302188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:40:27.231 [2024-11-05 16:05:48.302245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:40:27.231 [2024-11-05 16:05:48.302268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:40:27.231 [2024-11-05 16:05:48.302290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:40:27.231 [2024-11-05 16:05:48.302326] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:40:27.231 [2024-11-05 16:05:48.302349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:40:27.231 [2024-11-05 16:05:48.302424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:40:27.231 [2024-11-05 16:05:48.302449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:40:27.231 [2024-11-05 16:05:48.302471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:40:27.231 [2024-11-05 16:05:48.302494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:40:27.231 [2024-11-05 16:05:48.302516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:40:27.231 [2024-11-05 16:05:48.302568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:40:27.231 [2024-11-05 16:05:48.302590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:40:27.231 [2024-11-05 16:05:48.302615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:40:27.231 [2024-11-05 16:05:48.302637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:40:27.231 [2024-11-05 16:05:48.302661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:40:27.231 [2024-11-05 16:05:48.302706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:40:27.231 [2024-11-05 16:05:48.302731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:40:27.231 [2024-11-05 16:05:48.302762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:40:27.231 [2024-11-05 16:05:48.302785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:40:27.231 [2024-11-05 16:05:48.302807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:40:27.231 [2024-11-05 16:05:48.302829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:40:27.231 [2024-11-05 16:05:48.302902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:40:27.231 [2024-11-05 16:05:48.302925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:40:27.231 [2024-11-05 16:05:48.302946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:40:27.231 [2024-11-05 16:05:48.302968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:40:27.231 [2024-11-05 16:05:48.302990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:40:27.231 [2024-11-05 16:05:48.303055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:40:27.231 [2024-11-05 16:05:48.303115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:40:27.231 
[2024-11-05 16:05:48.303141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:40:27.231 [2024-11-05 16:05:48.303163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:40:27.231 [2024-11-05 16:05:48.303187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:40:27.231 [2024-11-05 16:05:48.303328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:40:27.231 [2024-11-05 16:05:48.303356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:40:27.231 [2024-11-05 16:05:48.303378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:40:27.231 [2024-11-05 16:05:48.303401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:40:27.231 [2024-11-05 16:05:48.303458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:40:27.231 [2024-11-05 16:05:48.303484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:40:27.231 [2024-11-05 16:05:48.303506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:40:27.231 [2024-11-05 16:05:48.303530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:40:27.231 [2024-11-05 16:05:48.303552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:40:27.231 [2024-11-05 16:05:48.303601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:40:27.231 [2024-11-05 16:05:48.303648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:40:27.231 [2024-11-05 16:05:48.303691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:40:27.231 [2024-11-05 16:05:48.303714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:40:27.231 [2024-11-05 16:05:48.303745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:40:27.231 [2024-11-05 16:05:48.303768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:40:27.231 [2024-11-05 16:05:48.303793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:40:27.231 [2024-11-05 16:05:48.303840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:40:27.232 [2024-11-05 16:05:48.303866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:40:27.232 [2024-11-05 16:05:48.303887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:40:27.232 [2024-11-05 16:05:48.303910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:40:27.232 [2024-11-05 16:05:48.303932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:40:27.232 [2024-11-05 16:05:48.303954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:40:27.232 [2024-11-05 16:05:48.304004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:40:27.232 [2024-11-05 16:05:48.304029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:40:27.232 [2024-11-05 16:05:48.304050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:40:27.232 [2024-11-05 16:05:48.304073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:40:27.232 [2024-11-05 16:05:48.304094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:40:27.232 [2024-11-05 16:05:48.304117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:40:27.232 [2024-11-05 16:05:48.304168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:40:27.232 [2024-11-05 16:05:48.304193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:40:27.232 [2024-11-05 16:05:48.304214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:40:27.232 [2024-11-05 16:05:48.304239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:40:27.232 [2024-11-05 16:05:48.304260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:40:27.232 [2024-11-05 16:05:48.304285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:40:27.232 [2024-11-05 16:05:48.304334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:40:27.232 [2024-11-05 16:05:48.304358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:40:27.232 [2024-11-05 16:05:48.304380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:40:27.232 [2024-11-05 16:05:48.304403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:40:27.232 [2024-11-05 16:05:48.304424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:40:27.232 [2024-11-05 16:05:48.304447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:40:27.232 [2024-11-05 16:05:48.304499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:40:27.232 [2024-11-05 16:05:48.304523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:40:27.232 [2024-11-05 16:05:48.304545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:40:27.232 [2024-11-05 16:05:48.304568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:40:27.232 [2024-11-05 16:05:48.304589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:40:27.232 [2024-11-05 16:05:48.304642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:40:27.232 [2024-11-05 16:05:48.304665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:40:27.232 [2024-11-05 16:05:48.304689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:40:27.232 [2024-11-05 16:05:48.304711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:40:27.232 [2024-11-05 16:05:48.304741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:40:27.232 [2024-11-05 16:05:48.304790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:40:27.232 [2024-11-05 16:05:48.304815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:40:27.232 [2024-11-05 16:05:48.304837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:40:27.232 [2024-11-05 16:05:48.304860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:40:27.232 [2024-11-05 16:05:48.304881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:40:27.232 [2024-11-05 16:05:48.304904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:40:27.232 [2024-11-05 16:05:48.304949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:40:27.232 [2024-11-05 16:05:48.304974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:40:27.232 [2024-11-05 16:05:48.304995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:40:27.232 [2024-11-05 16:05:48.305018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:40:27.232 [2024-11-05 16:05:48.305040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:40:27.232 [2024-11-05 16:05:48.305064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:40:27.232 [2024-11-05 16:05:48.305151] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:40:27.232 [2024-11-05 16:05:48.305171] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 00f76824-5a3a-4487-9ed4-1ffd3d9e229e 00:40:27.232 [2024-11-05 16:05:48.305198] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:40:27.232 [2024-11-05 16:05:48.305215] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:40:27.232 [2024-11-05 16:05:48.305230] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:40:27.232 [2024-11-05 16:05:48.305245] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:40:27.232 [2024-11-05 16:05:48.305284] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:40:27.232 [2024-11-05 16:05:48.305303] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:40:27.232 [2024-11-05 16:05:48.305318] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:40:27.232 [2024-11-05 16:05:48.305333] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:40:27.232 [2024-11-05 16:05:48.305346] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:40:27.232 [2024-11-05 16:05:48.305363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:40:27.232 [2024-11-05 16:05:48.305377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:40:27.232 [2024-11-05 16:05:48.305394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.406 ms 00:40:27.232 [2024-11-05 16:05:48.305408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:27.232 [2024-11-05 16:05:48.315012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:27.232 [2024-11-05 16:05:48.315093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:40:27.232 [2024-11-05 16:05:48.315136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.537 ms 00:40:27.232 [2024-11-05 16:05:48.315152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:27.232 [2024-11-05 16:05:48.315448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:27.232 [2024-11-05 16:05:48.315507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:40:27.232 [2024-11-05 16:05:48.315545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:40:27.232 [2024-11-05 16:05:48.315562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:27.232 [2024-11-05 16:05:48.350258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:27.232 [2024-11-05 16:05:48.350364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:40:27.232 [2024-11-05 16:05:48.350377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:27.232 [2024-11-05 16:05:48.350384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:27.232 [2024-11-05 16:05:48.350461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:27.232 [2024-11-05 16:05:48.350469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:40:27.232 [2024-11-05 16:05:48.350477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:27.232 [2024-11-05 16:05:48.350484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:27.232 [2024-11-05 16:05:48.350520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:27.232 [2024-11-05 16:05:48.350528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:40:27.232 [2024-11-05 16:05:48.350537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:27.232 [2024-11-05 16:05:48.350543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:27.232 [2024-11-05 16:05:48.350556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:27.232 [2024-11-05 16:05:48.350562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:40:27.232 [2024-11-05 16:05:48.350569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:27.232 [2024-11-05 16:05:48.350575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:27.232 [2024-11-05 16:05:48.409812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:27.232 [2024-11-05 16:05:48.409843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:40:27.232 [2024-11-05 16:05:48.409853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:27.232 [2024-11-05 16:05:48.409860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:27.232 [2024-11-05 
16:05:48.458113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:27.232 [2024-11-05 16:05:48.458146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:40:27.232 [2024-11-05 16:05:48.458155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:27.232 [2024-11-05 16:05:48.458163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:27.232 [2024-11-05 16:05:48.458221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:27.232 [2024-11-05 16:05:48.458229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:40:27.232 [2024-11-05 16:05:48.458238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:27.233 [2024-11-05 16:05:48.458244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:27.233 [2024-11-05 16:05:48.458267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:27.233 [2024-11-05 16:05:48.458273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:40:27.233 [2024-11-05 16:05:48.458281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:27.233 [2024-11-05 16:05:48.458286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:27.233 [2024-11-05 16:05:48.458374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:27.233 [2024-11-05 16:05:48.458382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:40:27.233 [2024-11-05 16:05:48.458389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:27.233 [2024-11-05 16:05:48.458395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:27.233 [2024-11-05 16:05:48.458420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:27.233 [2024-11-05 16:05:48.458427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:40:27.233 [2024-11-05 16:05:48.458434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:27.233 [2024-11-05 16:05:48.458440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:27.233 [2024-11-05 16:05:48.458470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:27.233 [2024-11-05 16:05:48.458479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:40:27.233 [2024-11-05 16:05:48.458488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:27.233 [2024-11-05 16:05:48.458494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:27.233 [2024-11-05 16:05:48.458528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:27.233 [2024-11-05 16:05:48.458536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:40:27.233 [2024-11-05 16:05:48.458543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:27.233 [2024-11-05 16:05:48.458549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:27.233 [2024-11-05 16:05:48.458654] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 212.610 ms, result 0 00:40:27.799 16:05:48 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:40:27.799 16:05:48 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:40:27.799 [2024-11-05 16:05:49.025348] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 00:40:27.799 [2024-11-05 16:05:49.025959] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73881 ] 00:40:28.057 [2024-11-05 16:05:49.182667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:28.057 [2024-11-05 16:05:49.258717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:28.316 [2024-11-05 16:05:49.464363] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:40:28.316 [2024-11-05 16:05:49.464411] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:40:28.316 [2024-11-05 16:05:49.612260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:28.316 [2024-11-05 16:05:49.612293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:40:28.316 [2024-11-05 16:05:49.612304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:40:28.316 [2024-11-05 16:05:49.612310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.316 [2024-11-05 16:05:49.614433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:28.316 [2024-11-05 16:05:49.614575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:40:28.316 [2024-11-05 16:05:49.614588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.110 ms 00:40:28.316 [2024-11-05 16:05:49.614595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.316 [2024-11-05 16:05:49.614651] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:40:28.316 [2024-11-05 16:05:49.615210] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:40:28.316 [2024-11-05 16:05:49.615223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:28.316 [2024-11-05 16:05:49.615230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:40:28.316 [2024-11-05 16:05:49.615237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.578 ms 00:40:28.316 [2024-11-05 16:05:49.615243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.316 [2024-11-05 16:05:49.616220] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:40:28.316 [2024-11-05 16:05:49.626228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:28.316 [2024-11-05 16:05:49.626257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:40:28.316 [2024-11-05 16:05:49.626266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.009 ms 00:40:28.316 [2024-11-05 16:05:49.626272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.316 [2024-11-05 16:05:49.626351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:28.316 [2024-11-05 16:05:49.626360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:40:28.316 [2024-11-05 16:05:49.626367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.014 ms 00:40:28.316 [2024-11-05 16:05:49.626373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.316 [2024-11-05 16:05:49.630783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:28.316 [2024-11-05 16:05:49.630806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:40:28.316 [2024-11-05 16:05:49.630813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.381 ms 00:40:28.316 [2024-11-05 16:05:49.630819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.316 [2024-11-05 16:05:49.630888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:28.316 [2024-11-05 16:05:49.630895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:40:28.316 [2024-11-05 16:05:49.630901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:40:28.316 [2024-11-05 16:05:49.630907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.316 [2024-11-05 16:05:49.630924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:28.316 [2024-11-05 16:05:49.630932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:40:28.316 [2024-11-05 16:05:49.630938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:40:28.316 [2024-11-05 16:05:49.630944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.316 [2024-11-05 16:05:49.630961] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:40:28.316 [2024-11-05 16:05:49.633672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:28.316 [2024-11-05 16:05:49.633809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:40:28.316 [2024-11-05 16:05:49.633823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.714 ms 00:40:28.316 [2024-11-05 16:05:49.633829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.317 [2024-11-05 16:05:49.633858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:28.317 [2024-11-05 16:05:49.633865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:40:28.317 [2024-11-05 16:05:49.633871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:40:28.317 [2024-11-05 16:05:49.633877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.317 [2024-11-05 16:05:49.633890] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:40:28.317 [2024-11-05 16:05:49.633907] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:40:28.317 [2024-11-05 16:05:49.633933] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:40:28.317 [2024-11-05 16:05:49.633945] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:40:28.317 [2024-11-05 16:05:49.634023] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:40:28.317 [2024-11-05 16:05:49.634032] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:40:28.317 [2024-11-05 16:05:49.634040] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:40:28.317 [2024-11-05 16:05:49.634048] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:40:28.317 [2024-11-05 16:05:49.634057] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:40:28.317 [2024-11-05 16:05:49.634063] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:40:28.317 [2024-11-05 16:05:49.634070] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:40:28.317 [2024-11-05 16:05:49.634075] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:40:28.317 [2024-11-05 16:05:49.634082] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:40:28.317 [2024-11-05 16:05:49.634088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:28.317 [2024-11-05 16:05:49.634094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:40:28.317 [2024-11-05 16:05:49.634100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.199 ms 00:40:28.317 [2024-11-05 16:05:49.634106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.317 [2024-11-05 16:05:49.634173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:28.317 [2024-11-05 16:05:49.634180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:40:28.317 [2024-11-05 16:05:49.634188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:40:28.317 [2024-11-05 16:05:49.634194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.317 [2024-11-05 16:05:49.634266] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:40:28.317 [2024-11-05 16:05:49.634274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:40:28.317 [2024-11-05 16:05:49.634281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:40:28.317 [2024-11-05 16:05:49.634287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:28.317 [2024-11-05 16:05:49.634292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:40:28.317 [2024-11-05 16:05:49.634306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:40:28.317 [2024-11-05 16:05:49.634313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:40:28.317 [2024-11-05 16:05:49.634319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:40:28.317 [2024-11-05 16:05:49.634325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:40:28.317 [2024-11-05 16:05:49.634331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:40:28.317 [2024-11-05 16:05:49.634336] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:40:28.317 [2024-11-05 16:05:49.634342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:40:28.317 [2024-11-05 16:05:49.634348] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:40:28.317 [2024-11-05 16:05:49.634358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:40:28.317 [2024-11-05 16:05:49.634364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:40:28.317 [2024-11-05 16:05:49.634369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:28.317 [2024-11-05 16:05:49.634375] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:40:28.317 [2024-11-05 16:05:49.634380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:40:28.317 [2024-11-05 16:05:49.634385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:28.317 [2024-11-05 16:05:49.634390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:40:28.317 [2024-11-05 16:05:49.634395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:40:28.317 [2024-11-05 16:05:49.634400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:28.317 [2024-11-05 16:05:49.634406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:40:28.317 [2024-11-05 16:05:49.634411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:40:28.317 [2024-11-05 16:05:49.634416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:28.317 [2024-11-05 16:05:49.634421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:40:28.317 [2024-11-05 16:05:49.634426] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:40:28.317 [2024-11-05 16:05:49.634431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:28.317 [2024-11-05 16:05:49.634436] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:40:28.317 [2024-11-05 16:05:49.634441] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:40:28.317 [2024-11-05 16:05:49.634446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:28.317 [2024-11-05 16:05:49.634451] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:40:28.317 [2024-11-05 16:05:49.634457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:40:28.317 [2024-11-05 16:05:49.634461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:40:28.317 [2024-11-05 16:05:49.634467] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:40:28.317 [2024-11-05 16:05:49.634471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:40:28.317 [2024-11-05 16:05:49.634476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:40:28.317 [2024-11-05 16:05:49.634481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:40:28.317 [2024-11-05 16:05:49.634486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:40:28.317 [2024-11-05 16:05:49.634491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:28.317 [2024-11-05 16:05:49.634496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:40:28.317 [2024-11-05 16:05:49.634501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:40:28.317 [2024-11-05 16:05:49.634506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:28.317 [2024-11-05 16:05:49.634514] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:40:28.317 [2024-11-05 16:05:49.634520] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:40:28.317 [2024-11-05 16:05:49.634525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:40:28.317 [2024-11-05 16:05:49.634533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:28.317 [2024-11-05 16:05:49.634539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:40:28.317 
[2024-11-05 16:05:49.634544] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:40:28.317 [2024-11-05 16:05:49.634549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:40:28.317 [2024-11-05 16:05:49.634554] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:40:28.317 [2024-11-05 16:05:49.634559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:40:28.317 [2024-11-05 16:05:49.634564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:40:28.317 [2024-11-05 16:05:49.634570] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:40:28.317 [2024-11-05 16:05:49.634578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:40:28.317 [2024-11-05 16:05:49.634584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:40:28.317 [2024-11-05 16:05:49.634589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:40:28.317 [2024-11-05 16:05:49.634595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:40:28.317 [2024-11-05 16:05:49.634600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:40:28.317 [2024-11-05 16:05:49.634605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:40:28.317 [2024-11-05 16:05:49.634612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:40:28.317 [2024-11-05 16:05:49.634618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:40:28.317 [2024-11-05 16:05:49.634623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:40:28.317 [2024-11-05 16:05:49.634629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:40:28.317 [2024-11-05 16:05:49.634634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:40:28.317 [2024-11-05 16:05:49.634639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:40:28.317 [2024-11-05 16:05:49.634644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:40:28.317 [2024-11-05 16:05:49.634650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:40:28.317 [2024-11-05 16:05:49.634656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:40:28.317 [2024-11-05 16:05:49.634661] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:40:28.318 [2024-11-05 16:05:49.634668] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:40:28.318 [2024-11-05 16:05:49.634674] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:40:28.318 [2024-11-05 16:05:49.634680] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:40:28.318 [2024-11-05 16:05:49.634685] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:40:28.318 [2024-11-05 16:05:49.634692] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:40:28.318 [2024-11-05 16:05:49.634698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:28.318 [2024-11-05 16:05:49.634704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:40:28.318 [2024-11-05 16:05:49.634711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.484 ms 00:40:28.318 [2024-11-05 16:05:49.634717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.318 [2024-11-05 16:05:49.655473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:28.318 [2024-11-05 16:05:49.655500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:40:28.318 [2024-11-05 16:05:49.655508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.707 ms 00:40:28.318 [2024-11-05 16:05:49.655514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.318 [2024-11-05 16:05:49.655605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:28.318 [2024-11-05 16:05:49.655615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:40:28.318 [2024-11-05 16:05:49.655622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:40:28.318 [2024-11-05 16:05:49.655628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.578 [2024-11-05 16:05:49.696075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:28.578 [2024-11-05 16:05:49.696199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:40:28.578 [2024-11-05 16:05:49.696213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.431 ms 00:40:28.578 [2024-11-05 16:05:49.696223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.578 [2024-11-05 16:05:49.696283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:28.578 [2024-11-05 16:05:49.696292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:40:28.578 [2024-11-05 16:05:49.696299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:40:28.578 [2024-11-05 16:05:49.696305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.578 [2024-11-05 16:05:49.696587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:28.578 [2024-11-05 16:05:49.696600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:40:28.578 [2024-11-05 16:05:49.696608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:40:28.578 [2024-11-05 16:05:49.696614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.578 [2024-11-05 
16:05:49.696724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:28.578 [2024-11-05 16:05:49.696732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:40:28.578 [2024-11-05 16:05:49.696758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:40:28.578 [2024-11-05 16:05:49.696763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.578 [2024-11-05 16:05:49.707555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:28.578 [2024-11-05 16:05:49.707656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:40:28.578 [2024-11-05 16:05:49.707669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.776 ms 00:40:28.578 [2024-11-05 16:05:49.707675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.578 [2024-11-05 16:05:49.717478] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:40:28.578 [2024-11-05 16:05:49.717507] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:40:28.578 [2024-11-05 16:05:49.717516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:28.578 [2024-11-05 16:05:49.717524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:40:28.578 [2024-11-05 16:05:49.717531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.734 ms 00:40:28.578 [2024-11-05 16:05:49.717536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.578 [2024-11-05 16:05:49.736006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:28.578 [2024-11-05 16:05:49.736045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:40:28.578 [2024-11-05 16:05:49.736054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.421 ms 00:40:28.578 [2024-11-05 16:05:49.736060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.578 [2024-11-05 16:05:49.745001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:28.578 [2024-11-05 16:05:49.745109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:40:28.578 [2024-11-05 16:05:49.745122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.879 ms 00:40:28.578 [2024-11-05 16:05:49.745128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.578 [2024-11-05 16:05:49.754111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:28.578 [2024-11-05 16:05:49.754136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:40:28.578 [2024-11-05 16:05:49.754143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.941 ms 00:40:28.578 [2024-11-05 16:05:49.754149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.578 [2024-11-05 16:05:49.754605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:28.578 [2024-11-05 16:05:49.754617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:40:28.578 [2024-11-05 16:05:49.754624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.393 ms 00:40:28.578 [2024-11-05 16:05:49.754630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.578 [2024-11-05 16:05:49.798227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:40:28.578 [2024-11-05 16:05:49.798270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:40:28.578 [2024-11-05 16:05:49.798280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.580 ms 00:40:28.578 [2024-11-05 16:05:49.798287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.578 [2024-11-05 16:05:49.806092] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:40:28.578 [2024-11-05 16:05:49.817540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:28.578 [2024-11-05 16:05:49.817567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:40:28.578 [2024-11-05 16:05:49.817577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.185 ms 00:40:28.578 [2024-11-05 16:05:49.817584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.578 [2024-11-05 16:05:49.817655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:28.578 [2024-11-05 16:05:49.817664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:40:28.578 [2024-11-05 16:05:49.817671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:40:28.578 [2024-11-05 16:05:49.817677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.579 [2024-11-05 16:05:49.817712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:28.579 [2024-11-05 16:05:49.817719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:40:28.579 [2024-11-05 16:05:49.817725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:40:28.579 [2024-11-05 16:05:49.817731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.579 [2024-11-05 16:05:49.817770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:28.579 [2024-11-05 16:05:49.817779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:40:28.579 [2024-11-05 16:05:49.817785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:40:28.579 [2024-11-05 16:05:49.817791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.579 [2024-11-05 16:05:49.817830] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:40:28.579 [2024-11-05 16:05:49.817838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:28.579 [2024-11-05 16:05:49.817844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:40:28.579 [2024-11-05 16:05:49.817851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:40:28.579 [2024-11-05 16:05:49.817858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.579 [2024-11-05 16:05:49.835744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:28.579 [2024-11-05 16:05:49.835771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:40:28.579 [2024-11-05 16:05:49.835779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.856 ms 00:40:28.579 [2024-11-05 16:05:49.835785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.579 [2024-11-05 16:05:49.835858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:28.579 [2024-11-05 16:05:49.835866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:40:28.579 [2024-11-05 16:05:49.835873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:40:28.579 [2024-11-05 16:05:49.835879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:28.579 [2024-11-05 16:05:49.836765] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:40:28.579 [2024-11-05 16:05:49.839190] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 224.268 ms, result 0 00:40:28.579 [2024-11-05 16:05:49.839981] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:40:28.579 [2024-11-05 16:05:49.850694] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:40:29.519  [2024-11-05T16:05:52.263Z] Copying: 22/256 [MB] (22 MBps) [2024-11-05T16:05:53.206Z] Copying: 41/256 [MB] (18 MBps) [2024-11-05T16:05:54.146Z] Copying: 60/256 [MB] (19 MBps) [2024-11-05T16:05:55.090Z] Copying: 80/256 [MB] (19 MBps) [2024-11-05T16:05:56.034Z] Copying: 96/256 [MB] (15 MBps) [2024-11-05T16:05:56.978Z] Copying: 115/256 [MB] (19 MBps) [2024-11-05T16:05:57.920Z] Copying: 139/256 [MB] (23 MBps) [2024-11-05T16:05:58.858Z] Copying: 158/256 [MB] (18 MBps) [2024-11-05T16:06:00.245Z] Copying: 177/256 [MB] (19 MBps) [2024-11-05T16:06:01.188Z] Copying: 198/256 [MB] (20 MBps) [2024-11-05T16:06:02.133Z] Copying: 213/256 [MB] (15 MBps) [2024-11-05T16:06:03.077Z] Copying: 229/256 [MB] (16 MBps) [2024-11-05T16:06:03.339Z] Copying: 250/256 [MB] (20 MBps) [2024-11-05T16:06:03.339Z] Copying: 256/256 [MB] (average 19 MBps)[2024-11-05 16:06:03.085533] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:40:41.977 [2024-11-05 16:06:03.095797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:41.977 [2024-11-05 16:06:03.095846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:40:41.978 [2024-11-05 16:06:03.095863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:40:41.978 [2024-11-05 16:06:03.095879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:41.978 [2024-11-05 16:06:03.095903] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:40:41.978 [2024-11-05 16:06:03.098915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:41.978 [2024-11-05 16:06:03.099113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:40:41.978 [2024-11-05 16:06:03.099136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.995 ms 00:40:41.978 [2024-11-05 16:06:03.099145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:41.978 [2024-11-05 16:06:03.099419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:41.978 [2024-11-05 16:06:03.099431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:40:41.978 [2024-11-05 16:06:03.099442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.242 ms 00:40:41.978 [2024-11-05 16:06:03.099450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:41.978 [2024-11-05 16:06:03.103180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:41.978 [2024-11-05 16:06:03.103214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist L2P 00:40:41.978 [2024-11-05 16:06:03.103223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.714 ms 00:40:41.978 [2024-11-05 16:06:03.103232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:41.978 [2024-11-05 16:06:03.110102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:41.978 [2024-11-05 16:06:03.110278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:40:41.978 [2024-11-05 16:06:03.110310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.852 ms 00:40:41.978 [2024-11-05 16:06:03.110318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:41.978 [2024-11-05 16:06:03.135262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:41.978 [2024-11-05 16:06:03.135308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:40:41.978 [2024-11-05 16:06:03.135322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.875 ms 00:40:41.978 [2024-11-05 16:06:03.135329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:41.978 [2024-11-05 16:06:03.150889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:41.978 [2024-11-05 16:06:03.150942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:40:41.978 [2024-11-05 16:06:03.150955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.510 ms 00:40:41.978 [2024-11-05 16:06:03.150967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:41.978 [2024-11-05 16:06:03.151117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:41.978 [2024-11-05 16:06:03.151130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:40:41.978 [2024-11-05 16:06:03.151139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:40:41.978 [2024-11-05 16:06:03.151147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:41.978 [2024-11-05 16:06:03.176945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:41.978 [2024-11-05 16:06:03.177124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:40:41.978 [2024-11-05 16:06:03.177144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.771 ms 00:40:41.978 [2024-11-05 16:06:03.177151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:41.978 [2024-11-05 16:06:03.202175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:41.978 [2024-11-05 16:06:03.202219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:40:41.978 [2024-11-05 16:06:03.202230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.940 ms 00:40:41.978 [2024-11-05 16:06:03.202237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:41.978 [2024-11-05 16:06:03.226417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:41.978 [2024-11-05 16:06:03.226463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:40:41.978 [2024-11-05 16:06:03.226474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.133 ms 00:40:41.978 [2024-11-05 16:06:03.226481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:41.978 [2024-11-05 16:06:03.250642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:41.978 [2024-11-05 16:06:03.250687] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:40:41.978 [2024-11-05 16:06:03.250698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.069 ms 00:40:41.978 [2024-11-05 16:06:03.250706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:41.978 [2024-11-05 16:06:03.250770] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:40:41.978 [2024-11-05 16:06:03.250786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:40:41.978 [2024-11-05 16:06:03.250797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:40:41.978 [2024-11-05 16:06:03.250805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:40:41.978 [2024-11-05 16:06:03.250813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:40:41.978 [2024-11-05 16:06:03.250820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:40:41.978 [2024-11-05 16:06:03.250829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:40:41.978 [2024-11-05 16:06:03.250837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:40:41.978 [2024-11-05 16:06:03.250845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:40:41.978 [2024-11-05 16:06:03.250853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:40:41.978 [2024-11-05 16:06:03.250861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:40:41.978 [2024-11-05 16:06:03.250868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:40:41.978 [2024-11-05 16:06:03.250876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:40:41.978 [2024-11-05 16:06:03.250883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:40:41.978 [2024-11-05 16:06:03.250890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:40:41.978 [2024-11-05 16:06:03.250898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:40:41.978 [2024-11-05 16:06:03.250905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:40:41.978 [2024-11-05 16:06:03.250913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:40:41.978 [2024-11-05 16:06:03.250921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:40:41.978 [2024-11-05 16:06:03.250928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:40:41.978 [2024-11-05 16:06:03.250935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:40:41.978 [2024-11-05 16:06:03.250943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:40:41.978 [2024-11-05 16:06:03.250950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:40:41.978 [2024-11-05 16:06:03.250957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:40:41.978 [2024-11-05 16:06:03.250964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:40:41.978 [2024-11-05 16:06:03.250972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:40:41.978 [2024-11-05 16:06:03.250980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:40:41.978 [2024-11-05 16:06:03.250989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:40:41.978 [2024-11-05 16:06:03.250996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:40:41.978 [2024-11-05 16:06:03.251004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:40:41.978 [2024-11-05 16:06:03.251011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:40:41.978 [2024-11-05 16:06:03.251020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:40:41.978 [2024-11-05 16:06:03.251028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:40:41.978 [2024-11-05 16:06:03.251036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:40:41.978 [2024-11-05 16:06:03.251044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:40:41.978 [2024-11-05 16:06:03.251051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:40:41.978 [2024-11-05 16:06:03.251058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:40:41.978 [2024-11-05 16:06:03.251065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:40:41.978 [2024-11-05 16:06:03.251082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:40:41.978 [2024-11-05 16:06:03.251091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:40:41.978 [2024-11-05 16:06:03.251099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:40:41.978 [2024-11-05 16:06:03.251106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:40:41.978 [2024-11-05 16:06:03.251113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:40:41.978 [2024-11-05 16:06:03.251121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:40:41.978 [2024-11-05 16:06:03.251128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:40:41.978 [2024-11-05 16:06:03.251135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:40:41.978 [2024-11-05 16:06:03.251143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:40:41.978 [2024-11-05 16:06:03.251150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:40:41.978 [2024-11-05 16:06:03.251158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:40:41.978 [2024-11-05 16:06:03.251165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:40:41.978 [2024-11-05 16:06:03.251173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:40:41.978 [2024-11-05 16:06:03.251180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:40:41.979 [2024-11-05 16:06:03.251188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:40:41.979 [2024-11-05 16:06:03.251196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:40:41.979 [2024-11-05 16:06:03.251203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:40:41.979 [2024-11-05 16:06:03.251210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:40:41.979 [2024-11-05 16:06:03.251217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:40:41.979 [2024-11-05 16:06:03.251224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:40:41.979 [2024-11-05 16:06:03.251232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:40:41.979 [2024-11-05 16:06:03.251239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:40:41.979 [2024-11-05 16:06:03.251246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:40:41.979 [2024-11-05 16:06:03.251254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:40:41.979 [2024-11-05 16:06:03.251261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:40:41.979 [2024-11-05 16:06:03.251272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:40:41.979 [2024-11-05 16:06:03.251279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:40:41.979 [2024-11-05 16:06:03.251286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:40:41.979 [2024-11-05 16:06:03.251294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:40:41.979 [2024-11-05 16:06:03.251301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:40:41.979 [2024-11-05 16:06:03.251309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:40:41.979 [2024-11-05 16:06:03.251317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:40:41.979 [2024-11-05 16:06:03.251324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:40:41.979 [2024-11-05 16:06:03.251331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:40:41.979 [2024-11-05 16:06:03.251338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:40:41.979 [2024-11-05 16:06:03.251345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:40:41.979 [2024-11-05 16:06:03.251353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:40:41.979 [2024-11-05 16:06:03.251360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:40:41.979 [2024-11-05 16:06:03.251367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:40:41.979 [2024-11-05 16:06:03.251375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:40:41.979 [2024-11-05 16:06:03.251383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:40:41.979 [2024-11-05 16:06:03.251391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:40:41.979 [2024-11-05 16:06:03.251398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:40:41.979 [2024-11-05 16:06:03.251406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:40:41.979 [2024-11-05 16:06:03.251413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:40:41.979 [2024-11-05 16:06:03.251420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:40:41.979 [2024-11-05 16:06:03.251428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:40:41.979 [2024-11-05 16:06:03.251437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:40:41.979 [2024-11-05 16:06:03.251448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:40:41.979 [2024-11-05 16:06:03.251455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:40:41.979 [2024-11-05 16:06:03.251462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:40:41.979 [2024-11-05 16:06:03.251469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:40:41.979 [2024-11-05 16:06:03.251476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:40:41.979 [2024-11-05 16:06:03.251484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:40:41.979 [2024-11-05 16:06:03.251491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:40:41.979 [2024-11-05 16:06:03.251499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:40:41.979 [2024-11-05 16:06:03.251506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:40:41.979 [2024-11-05 16:06:03.251519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:40:41.979 [2024-11-05 16:06:03.251535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:40:41.979 [2024-11-05 16:06:03.251543] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:40:41.979 [2024-11-05 16:06:03.251551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:40:41.979 [2024-11-05 16:06:03.251558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:40:41.979 [2024-11-05 16:06:03.251566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:40:41.979 [2024-11-05 16:06:03.251582] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:40:41.979 [2024-11-05 16:06:03.251590] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 00f76824-5a3a-4487-9ed4-1ffd3d9e229e 00:40:41.979 [2024-11-05 16:06:03.251599] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:40:41.979 [2024-11-05 16:06:03.251606] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:40:41.979 [2024-11-05 16:06:03.251614] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:40:41.979 [2024-11-05 16:06:03.251622] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:40:41.979 [2024-11-05 16:06:03.251630] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:40:41.979 [2024-11-05 16:06:03.251638] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:40:41.979 [2024-11-05 16:06:03.251646] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:40:41.979 [2024-11-05 16:06:03.251653] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:40:41.979 [2024-11-05 16:06:03.251659] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:40:41.979 [2024-11-05 16:06:03.251667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:41.979 [2024-11-05 16:06:03.251684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:40:41.979 [2024-11-05 16:06:03.251694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.898 ms 00:40:41.979 [2024-11-05 16:06:03.251702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:41.979 [2024-11-05 16:06:03.265463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:41.979 [2024-11-05 16:06:03.265506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:40:41.979 [2024-11-05 16:06:03.265517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.729 ms 00:40:41.979 [2024-11-05 16:06:03.265525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:41.979 [2024-11-05 16:06:03.265962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:41.979 [2024-11-05 16:06:03.265978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:40:41.979 [2024-11-05 16:06:03.265988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.398 ms 00:40:41.979 [2024-11-05 16:06:03.265996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:41.979 [2024-11-05 16:06:03.304597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:41.979 [2024-11-05 16:06:03.304818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:40:41.979 [2024-11-05 16:06:03.304837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:41.979 [2024-11-05 16:06:03.304846] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:40:41.979 [2024-11-05 16:06:03.304935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:41.979 [2024-11-05 16:06:03.304945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:40:41.979 [2024-11-05 16:06:03.304955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:41.979 [2024-11-05 16:06:03.304963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:41.979 [2024-11-05 16:06:03.305017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:41.979 [2024-11-05 16:06:03.305027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:40:41.979 [2024-11-05 16:06:03.305035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:41.979 [2024-11-05 16:06:03.305043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:41.979 [2024-11-05 16:06:03.305066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:41.979 [2024-11-05 16:06:03.305076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:40:41.979 [2024-11-05 16:06:03.305084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:41.979 [2024-11-05 16:06:03.305092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:42.241 [2024-11-05 16:06:03.388866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:42.241 [2024-11-05 16:06:03.388924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:40:42.241 [2024-11-05 16:06:03.388937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:42.241 [2024-11-05 16:06:03.388946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:42.241 [2024-11-05 16:06:03.457765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:42.241 [2024-11-05 16:06:03.457824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:40:42.241 [2024-11-05 16:06:03.457837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:42.241 [2024-11-05 16:06:03.457846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:42.241 [2024-11-05 16:06:03.457905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:42.241 [2024-11-05 16:06:03.457915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:40:42.241 [2024-11-05 16:06:03.457924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:42.241 [2024-11-05 16:06:03.457932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:42.241 [2024-11-05 16:06:03.457966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:42.241 [2024-11-05 16:06:03.457975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:40:42.241 [2024-11-05 16:06:03.457987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:42.241 [2024-11-05 16:06:03.457995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:42.241 [2024-11-05 16:06:03.458094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:42.241 [2024-11-05 16:06:03.458105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:40:42.241 [2024-11-05 16:06:03.458113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:40:42.241 [2024-11-05 16:06:03.458121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:42.241 [2024-11-05 16:06:03.458154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:42.241 [2024-11-05 16:06:03.458164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:40:42.241 [2024-11-05 16:06:03.458173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:42.241 [2024-11-05 16:06:03.458184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:42.241 [2024-11-05 16:06:03.458227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:42.241 [2024-11-05 16:06:03.458236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:40:42.241 [2024-11-05 16:06:03.458245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:42.241 [2024-11-05 16:06:03.458253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:42.241 [2024-11-05 16:06:03.458319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:42.241 [2024-11-05 16:06:03.458330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:40:42.241 [2024-11-05 16:06:03.458342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:42.241 [2024-11-05 16:06:03.458350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:42.241 [2024-11-05 16:06:03.458503] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 362.693 ms, result 0 00:40:43.185 00:40:43.185 00:40:43.185 16:06:04 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:40:43.185 16:06:04 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:40:43.446 16:06:04 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:40:43.707 [2024-11-05 16:06:04.852578] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 
00:40:43.708 [2024-11-05 16:06:04.852769] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74052 ] 00:40:43.708 [2024-11-05 16:06:05.020535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:43.968 [2024-11-05 16:06:05.142550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:44.229 [2024-11-05 16:06:05.430983] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:40:44.229 [2024-11-05 16:06:05.431062] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:40:44.492 [2024-11-05 16:06:05.593187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:44.492 [2024-11-05 16:06:05.593424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:40:44.492 [2024-11-05 16:06:05.593450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:40:44.492 [2024-11-05 16:06:05.593460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:44.492 [2024-11-05 16:06:05.596467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:44.492 [2024-11-05 16:06:05.596640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:40:44.492 [2024-11-05 16:06:05.596661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.979 ms 00:40:44.492 [2024-11-05 16:06:05.596669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:44.492 [2024-11-05 16:06:05.597442] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:40:44.492 [2024-11-05 16:06:05.598403] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:40:44.492 [2024-11-05 16:06:05.598456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:44.492 [2024-11-05 16:06:05.598467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:40:44.492 [2024-11-05 16:06:05.598478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.032 ms 00:40:44.492 [2024-11-05 16:06:05.598486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:44.492 [2024-11-05 16:06:05.600235] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:40:44.492 [2024-11-05 16:06:05.614557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:44.492 [2024-11-05 16:06:05.614609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:40:44.492 [2024-11-05 16:06:05.614623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.323 ms 00:40:44.492 [2024-11-05 16:06:05.614631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:44.492 [2024-11-05 16:06:05.614766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:44.492 [2024-11-05 16:06:05.614781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:40:44.492 [2024-11-05 16:06:05.614792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:40:44.492 [2024-11-05 16:06:05.614800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:44.492 [2024-11-05 16:06:05.622661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:40:44.492 [2024-11-05 16:06:05.622870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:40:44.492 [2024-11-05 16:06:05.622889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.814 ms 00:40:44.492 [2024-11-05 16:06:05.622899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:44.492 [2024-11-05 16:06:05.623011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:44.492 [2024-11-05 16:06:05.623023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:40:44.492 [2024-11-05 16:06:05.623032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:40:44.492 [2024-11-05 16:06:05.623041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:44.492 [2024-11-05 16:06:05.623068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:44.492 [2024-11-05 16:06:05.623079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:40:44.492 [2024-11-05 16:06:05.623088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:40:44.492 [2024-11-05 16:06:05.623096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:44.492 [2024-11-05 16:06:05.623119] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:40:44.492 [2024-11-05 16:06:05.627148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:44.492 [2024-11-05 16:06:05.627184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:40:44.492 [2024-11-05 16:06:05.627195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.035 ms 00:40:44.492 [2024-11-05 16:06:05.627203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:44.492 [2024-11-05 16:06:05.627276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:44.492 [2024-11-05 16:06:05.627286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:40:44.492 [2024-11-05 16:06:05.627296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:40:44.492 [2024-11-05 16:06:05.627303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:44.492 [2024-11-05 16:06:05.627323] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:40:44.492 [2024-11-05 16:06:05.627348] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:40:44.492 [2024-11-05 16:06:05.627386] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:40:44.492 [2024-11-05 16:06:05.627402] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:40:44.492 [2024-11-05 16:06:05.627508] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:40:44.492 [2024-11-05 16:06:05.627520] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:40:44.492 [2024-11-05 16:06:05.627532] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:40:44.492 [2024-11-05 16:06:05.627543] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:40:44.492 [2024-11-05 16:06:05.627556] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:40:44.492 [2024-11-05 16:06:05.627565] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:40:44.492 [2024-11-05 16:06:05.627573] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:40:44.492 [2024-11-05 16:06:05.627581] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:40:44.492 [2024-11-05 16:06:05.627589] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:40:44.492 [2024-11-05 16:06:05.627597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:44.492 [2024-11-05 16:06:05.627605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:40:44.492 [2024-11-05 16:06:05.627614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:40:44.492 [2024-11-05 16:06:05.627621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:44.492 [2024-11-05 16:06:05.627710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:44.492 [2024-11-05 16:06:05.627719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:40:44.492 [2024-11-05 16:06:05.627730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:40:44.492 [2024-11-05 16:06:05.627759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:44.492 [2024-11-05 16:06:05.627859] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:40:44.492 [2024-11-05 16:06:05.627870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:40:44.492 [2024-11-05 16:06:05.627879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:40:44.492 [2024-11-05 16:06:05.627889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:44.492 [2024-11-05 16:06:05.627897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:40:44.492 [2024-11-05 16:06:05.627904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:40:44.492 [2024-11-05 16:06:05.627911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:40:44.492 [2024-11-05 16:06:05.627918] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:40:44.492 [2024-11-05 16:06:05.627926] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:40:44.492 [2024-11-05 16:06:05.627933] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:40:44.492 [2024-11-05 16:06:05.627940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:40:44.492 [2024-11-05 16:06:05.627947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:40:44.492 [2024-11-05 16:06:05.627954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:40:44.492 [2024-11-05 16:06:05.627969] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:40:44.492 [2024-11-05 16:06:05.627979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:40:44.492 [2024-11-05 16:06:05.627986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:44.492 [2024-11-05 16:06:05.627993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:40:44.492 [2024-11-05 16:06:05.628001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:40:44.492 [2024-11-05 16:06:05.628007] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:44.492 [2024-11-05 16:06:05.628014] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:40:44.492 [2024-11-05 16:06:05.628020] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:40:44.492 [2024-11-05 16:06:05.628027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:44.492 [2024-11-05 16:06:05.628033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:40:44.492 [2024-11-05 16:06:05.628040] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:40:44.492 [2024-11-05 16:06:05.628046] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:44.493 [2024-11-05 16:06:05.628054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:40:44.493 [2024-11-05 16:06:05.628060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:40:44.493 [2024-11-05 16:06:05.628066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:44.493 [2024-11-05 16:06:05.628072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:40:44.493 [2024-11-05 16:06:05.628079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:40:44.493 [2024-11-05 16:06:05.628085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:44.493 [2024-11-05 16:06:05.628092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:40:44.493 [2024-11-05 16:06:05.628099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:40:44.493 [2024-11-05 16:06:05.628106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:40:44.493 [2024-11-05 16:06:05.628112] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:40:44.493 [2024-11-05 16:06:05.628119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:40:44.493 [2024-11-05 16:06:05.628126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:40:44.493 [2024-11-05 16:06:05.628133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:40:44.493 [2024-11-05 16:06:05.628140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:40:44.493 [2024-11-05 16:06:05.628147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:44.493 [2024-11-05 16:06:05.628154] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:40:44.493 [2024-11-05 16:06:05.628161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:40:44.493 [2024-11-05 16:06:05.628167] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:44.493 [2024-11-05 16:06:05.628174] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:40:44.493 [2024-11-05 16:06:05.628187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:40:44.493 [2024-11-05 16:06:05.628195] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:40:44.493 [2024-11-05 16:06:05.628206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:44.493 [2024-11-05 16:06:05.628214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:40:44.493 [2024-11-05 16:06:05.628221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:40:44.493 [2024-11-05 16:06:05.628228] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:40:44.493 
[2024-11-05 16:06:05.628236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:40:44.493 [2024-11-05 16:06:05.628244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:40:44.493 [2024-11-05 16:06:05.628251] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:40:44.493 [2024-11-05 16:06:05.628260] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:40:44.493 [2024-11-05 16:06:05.628270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:40:44.493 [2024-11-05 16:06:05.628279] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:40:44.493 [2024-11-05 16:06:05.628286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:40:44.493 [2024-11-05 16:06:05.628294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:40:44.493 [2024-11-05 16:06:05.628301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:40:44.493 [2024-11-05 16:06:05.628308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:40:44.493 [2024-11-05 16:06:05.628315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:40:44.493 [2024-11-05 16:06:05.628324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:40:44.493 [2024-11-05 16:06:05.628332] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:40:44.493 [2024-11-05 16:06:05.628340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:40:44.493 [2024-11-05 16:06:05.628347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:40:44.493 [2024-11-05 16:06:05.628355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:40:44.493 [2024-11-05 16:06:05.628362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:40:44.493 [2024-11-05 16:06:05.628369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:40:44.493 [2024-11-05 16:06:05.628376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:40:44.493 [2024-11-05 16:06:05.628383] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:40:44.493 [2024-11-05 16:06:05.628391] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:40:44.493 [2024-11-05 16:06:05.628399] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:40:44.493 [2024-11-05 16:06:05.628406] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:40:44.493 [2024-11-05 16:06:05.628415] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:40:44.493 [2024-11-05 16:06:05.628422] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:40:44.493 [2024-11-05 16:06:05.628429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:44.493 [2024-11-05 16:06:05.628437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:40:44.493 [2024-11-05 16:06:05.628448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.639 ms 00:40:44.493 [2024-11-05 16:06:05.628457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:44.493 [2024-11-05 16:06:05.660109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:44.493 [2024-11-05 16:06:05.660156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:40:44.493 [2024-11-05 16:06:05.660168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.600 ms 00:40:44.493 [2024-11-05 16:06:05.660177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:44.493 [2024-11-05 16:06:05.660310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:44.493 [2024-11-05 16:06:05.660326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:40:44.493 [2024-11-05 16:06:05.660336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:40:44.493 [2024-11-05 16:06:05.660344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:44.493 [2024-11-05 16:06:05.706428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:44.493 [2024-11-05 16:06:05.706481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:40:44.493 [2024-11-05 16:06:05.706495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.061 ms 00:40:44.493 [2024-11-05 16:06:05.706508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:44.493 [2024-11-05 16:06:05.706624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:44.493 [2024-11-05 16:06:05.706637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:40:44.493 [2024-11-05 16:06:05.706647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:40:44.493 [2024-11-05 16:06:05.706656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:44.493 [2024-11-05 16:06:05.707223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:44.493 [2024-11-05 16:06:05.707272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:40:44.493 [2024-11-05 16:06:05.707284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.542 ms 00:40:44.493 [2024-11-05 16:06:05.707298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:44.493 [2024-11-05 16:06:05.707455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:44.493 [2024-11-05 16:06:05.707472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:40:44.493 [2024-11-05 16:06:05.707481] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:40:44.493 [2024-11-05 16:06:05.707489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:44.493 [2024-11-05 16:06:05.723918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:44.493 [2024-11-05 16:06:05.723959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:40:44.493 [2024-11-05 16:06:05.723970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.405 ms 00:40:44.493 [2024-11-05 16:06:05.723979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:44.493 [2024-11-05 16:06:05.738179] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:40:44.493 [2024-11-05 16:06:05.738224] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:40:44.493 [2024-11-05 16:06:05.738237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:44.493 [2024-11-05 16:06:05.738246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:40:44.493 [2024-11-05 16:06:05.738255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.145 ms 00:40:44.493 [2024-11-05 16:06:05.738263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:44.493 [2024-11-05 16:06:05.763802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:44.493 [2024-11-05 16:06:05.763860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:40:44.493 [2024-11-05 16:06:05.763872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.433 ms 00:40:44.493 [2024-11-05 16:06:05.763880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:44.493 [2024-11-05 16:06:05.776437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:44.494 [2024-11-05 16:06:05.776482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:40:44.494 [2024-11-05 16:06:05.776494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.464 ms 00:40:44.494 [2024-11-05 16:06:05.776501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:44.494 [2024-11-05 16:06:05.789023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:44.494 [2024-11-05 16:06:05.789065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:40:44.494 [2024-11-05 16:06:05.789076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.439 ms 00:40:44.494 [2024-11-05 16:06:05.789083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:44.494 [2024-11-05 16:06:05.789720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:44.494 [2024-11-05 16:06:05.789759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:40:44.494 [2024-11-05 16:06:05.789772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.523 ms 00:40:44.494 [2024-11-05 16:06:05.789780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:44.755 [2024-11-05 16:06:05.854063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:44.755 [2024-11-05 16:06:05.854121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:40:44.755 [2024-11-05 16:06:05.854140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 64.255 ms 00:40:44.755 [2024-11-05 16:06:05.854150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:44.756 [2024-11-05 16:06:05.865583] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:40:44.756 [2024-11-05 16:06:05.885103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:44.756 [2024-11-05 16:06:05.885303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:40:44.756 [2024-11-05 16:06:05.885326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.843 ms 00:40:44.756 [2024-11-05 16:06:05.885336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:44.756 [2024-11-05 16:06:05.885448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:44.756 [2024-11-05 16:06:05.885461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:40:44.756 [2024-11-05 16:06:05.885471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:40:44.756 [2024-11-05 16:06:05.885479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:44.756 [2024-11-05 16:06:05.885538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:44.756 [2024-11-05 16:06:05.885548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:40:44.756 [2024-11-05 16:06:05.885557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:40:44.756 [2024-11-05 16:06:05.885565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:44.756 [2024-11-05 16:06:05.885593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:44.756 [2024-11-05 16:06:05.885605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:40:44.756 [2024-11-05 16:06:05.885615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:40:44.756 [2024-11-05 16:06:05.885623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:44.756 [2024-11-05 16:06:05.885664] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:40:44.756 [2024-11-05 16:06:05.885676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:44.756 [2024-11-05 16:06:05.885685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:40:44.756 [2024-11-05 16:06:05.885694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:40:44.756 [2024-11-05 16:06:05.885701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:44.756 [2024-11-05 16:06:05.911325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:44.756 [2024-11-05 16:06:05.911368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:40:44.756 [2024-11-05 16:06:05.911381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.600 ms 00:40:44.756 [2024-11-05 16:06:05.911390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:44.756 [2024-11-05 16:06:05.911526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:44.756 [2024-11-05 16:06:05.911538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:40:44.756 [2024-11-05 16:06:05.911549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:40:44.756 [2024-11-05 16:06:05.911558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
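[Editor's note] Every management step above is traced by mngt/ftl_mngt.c:trace_step as the same four-record group: Action, name, duration, status. When digging into where FTL startup time goes, it is handy to pair each "name:" record with the "duration:" record that follows it and total the results per step; the sums can then be cross-checked against the overall "Management process finished, name 'FTL startup', duration = 319.151 ms" summary a few records below. A minimal sketch, assuming one trace record per line (as spdk_tgt emits them); the log path is a placeholder:

#!/usr/bin/env python3
# Sum FTL trace_step durations per step name from an autotest console log.
# Sketch only: assumes each trace record sits on its own line.
import re
import sys
from collections import defaultdict

NAME_RE = re.compile(r"\[FTL\]\[\w+\] name: (.+?)\s*$")
DUR_RE = re.compile(r"\[FTL\]\[\w+\] duration: ([0-9.]+) ms")

totals = defaultdict(float)
pending = None
with open(sys.argv[1] if len(sys.argv) > 1 else "autotest.log") as fh:
    for line in fh:
        if (m := NAME_RE.search(line)):
            pending = m.group(1)          # remember the step name...
        elif (m := DUR_RE.search(line)) and pending:
            totals[pending] += float(m.group(1))  # ...and bill the following duration to it
            pending = None

for name, ms in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{ms:10.3f} ms  {name}")

In this run the big contributors are visibly "Restore P2L checkpoints" (64.255 ms), "Initialize NV cache" (46.061 ms), and the two ~31 ms metadata/L2P initialization steps.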
00:40:44.756 [2024-11-05 16:06:05.912661] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:40:44.756 [2024-11-05 16:06:05.916129] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 319.151 ms, result 0 00:40:44.756 [2024-11-05 16:06:05.917444] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:40:44.756 [2024-11-05 16:06:05.931796] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:40:45.018  [2024-11-05T16:06:06.380Z] Copying: 4096/4096 [kB] (average 15 MBps)[2024-11-05 16:06:06.191507] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:40:45.018 [2024-11-05 16:06:06.200454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:45.018 [2024-11-05 16:06:06.200499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:40:45.018 [2024-11-05 16:06:06.200511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:40:45.018 [2024-11-05 16:06:06.200526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:45.018 [2024-11-05 16:06:06.200549] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:40:45.019 [2024-11-05 16:06:06.203582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:45.019 [2024-11-05 16:06:06.203619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:40:45.019 [2024-11-05 16:06:06.203631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.019 ms 00:40:45.019 [2024-11-05 16:06:06.203640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:45.019 [2024-11-05 16:06:06.206569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:45.019 [2024-11-05 16:06:06.206727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:40:45.019 [2024-11-05 16:06:06.206761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.900 ms 00:40:45.019 [2024-11-05 16:06:06.206770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:45.019 [2024-11-05 16:06:06.211222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:45.019 [2024-11-05 16:06:06.211265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:40:45.019 [2024-11-05 16:06:06.211275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.430 ms 00:40:45.019 [2024-11-05 16:06:06.211283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:45.019 [2024-11-05 16:06:06.218240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:45.019 [2024-11-05 16:06:06.218280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:40:45.019 [2024-11-05 16:06:06.218301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.924 ms 00:40:45.019 [2024-11-05 16:06:06.218309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:45.019 [2024-11-05 16:06:06.243284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:45.019 [2024-11-05 16:06:06.243326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:40:45.019 [2024-11-05 16:06:06.243338] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 24.906 ms 00:40:45.019 [2024-11-05 16:06:06.243345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:45.019 [2024-11-05 16:06:06.258935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:45.019 [2024-11-05 16:06:06.258984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:40:45.019 [2024-11-05 16:06:06.259000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.541 ms 00:40:45.019 [2024-11-05 16:06:06.259009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:45.019 [2024-11-05 16:06:06.259157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:45.019 [2024-11-05 16:06:06.259168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:40:45.019 [2024-11-05 16:06:06.259177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:40:45.019 [2024-11-05 16:06:06.259185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:45.019 [2024-11-05 16:06:06.284820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:45.019 [2024-11-05 16:06:06.284992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:40:45.019 [2024-11-05 16:06:06.285012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.609 ms 00:40:45.019 [2024-11-05 16:06:06.285020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:45.019 [2024-11-05 16:06:06.310157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:45.019 [2024-11-05 16:06:06.310200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:40:45.019 [2024-11-05 16:06:06.310211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.083 ms 00:40:45.019 [2024-11-05 16:06:06.310218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:45.019 [2024-11-05 16:06:06.334762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:45.019 [2024-11-05 16:06:06.334816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:40:45.019 [2024-11-05 16:06:06.334826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.496 ms 00:40:45.019 [2024-11-05 16:06:06.334834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:45.019 [2024-11-05 16:06:06.359145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:45.019 [2024-11-05 16:06:06.359186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:40:45.019 [2024-11-05 16:06:06.359197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.237 ms 00:40:45.019 [2024-11-05 16:06:06.359204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:45.019 [2024-11-05 16:06:06.359250] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:40:45.019 [2024-11-05 16:06:06.359265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:40:45.019 [2024-11-05 16:06:06.359276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:40:45.019 [2024-11-05 16:06:06.359284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:40:45.019 [2024-11-05 16:06:06.359291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:40:45.019 [2024-11-05 16:06:06.359299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:40:45.019 [2024-11-05 16:06:06.359307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:40:45.019 [2024-11-05 16:06:06.359314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:40:45.019 [2024-11-05 16:06:06.359322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:40:45.019 [2024-11-05 16:06:06.359330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:40:45.019 [2024-11-05 16:06:06.359338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:40:45.019 [2024-11-05 16:06:06.359345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:40:45.019 [2024-11-05 16:06:06.359352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:40:45.019 [2024-11-05 16:06:06.359360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:40:45.019 [2024-11-05 16:06:06.359367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:40:45.019 [2024-11-05 16:06:06.359375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:40:45.019 [2024-11-05 16:06:06.359382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:40:45.019 [2024-11-05 16:06:06.359389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:40:45.019 [2024-11-05 16:06:06.359397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:40:45.019 [2024-11-05 16:06:06.359404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:40:45.019 [2024-11-05 16:06:06.359411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:40:45.019 [2024-11-05 16:06:06.359419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:40:45.019 [2024-11-05 16:06:06.359426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:40:45.019 [2024-11-05 16:06:06.359433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:40:45.019 [2024-11-05 16:06:06.359441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:40:45.019 [2024-11-05 16:06:06.359448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:40:45.019 [2024-11-05 16:06:06.359455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:40:45.019 [2024-11-05 16:06:06.359464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:40:45.019 [2024-11-05 16:06:06.359472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:40:45.019 [2024-11-05 16:06:06.359479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:40:45.019 [2024-11-05 16:06:06.359489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:40:45.019 [2024-11-05 16:06:06.359497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:40:45.019 [2024-11-05 16:06:06.359505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:40:45.019 [2024-11-05 16:06:06.359513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:40:45.019 [2024-11-05 16:06:06.359521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:40:45.019 [2024-11-05 16:06:06.359529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:40:45.019 [2024-11-05 16:06:06.359537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:40:45.019 [2024-11-05 16:06:06.359545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:40:45.019 [2024-11-05 16:06:06.359552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:40:45.019 [2024-11-05 16:06:06.359559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:40:45.019 [2024-11-05 16:06:06.359567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:40:45.019 [2024-11-05 16:06:06.359574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:40:45.019 [2024-11-05 16:06:06.359582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:40:45.019 [2024-11-05 16:06:06.359591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:40:45.019 [2024-11-05 16:06:06.359598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:40:45.019 [2024-11-05 16:06:06.359607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:40:45.019 [2024-11-05 16:06:06.359614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:40:45.019 [2024-11-05 16:06:06.359622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:40:45.019 [2024-11-05 16:06:06.359630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:40:45.019 [2024-11-05 16:06:06.359637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:40:45.019 [2024-11-05 16:06:06.359645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:40:45.020 [2024-11-05 16:06:06.359653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:40:45.020 [2024-11-05 16:06:06.359660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:40:45.020 [2024-11-05 16:06:06.359668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:40:45.020 [2024-11-05 16:06:06.359675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:40:45.020 [2024-11-05 16:06:06.359682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:40:45.020 [2024-11-05 16:06:06.359691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:40:45.020 [2024-11-05 16:06:06.359699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:40:45.020 [2024-11-05 16:06:06.359706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:40:45.020 [2024-11-05 16:06:06.359713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:40:45.020 [2024-11-05 16:06:06.359720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:40:45.020 [2024-11-05 16:06:06.359728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:40:45.020 [2024-11-05 16:06:06.359757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:40:45.020 [2024-11-05 16:06:06.359766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:40:45.020 [2024-11-05 16:06:06.359774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:40:45.020 [2024-11-05 16:06:06.359781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:40:45.020 [2024-11-05 16:06:06.359790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:40:45.020 [2024-11-05 16:06:06.359798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:40:45.020 [2024-11-05 16:06:06.359806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:40:45.020 [2024-11-05 16:06:06.359814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:40:45.020 [2024-11-05 16:06:06.359821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:40:45.020 [2024-11-05 16:06:06.359829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:40:45.020 [2024-11-05 16:06:06.359836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:40:45.020 [2024-11-05 16:06:06.359844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:40:45.020 [2024-11-05 16:06:06.359852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:40:45.020 [2024-11-05 16:06:06.359861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:40:45.020 [2024-11-05 16:06:06.359869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:40:45.020 [2024-11-05 16:06:06.359900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:40:45.020 [2024-11-05 16:06:06.359908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:40:45.020 [2024-11-05 16:06:06.359916] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:40:45.020 [2024-11-05 16:06:06.359923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:40:45.020 [2024-11-05 16:06:06.359933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:40:45.020 [2024-11-05 16:06:06.359940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:40:45.020 [2024-11-05 16:06:06.359948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:40:45.020 [2024-11-05 16:06:06.359956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:40:45.020 [2024-11-05 16:06:06.359964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:40:45.020 [2024-11-05 16:06:06.359972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:40:45.020 [2024-11-05 16:06:06.359980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:40:45.020 [2024-11-05 16:06:06.359987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:40:45.020 [2024-11-05 16:06:06.359995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:40:45.020 [2024-11-05 16:06:06.360002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:40:45.020 [2024-11-05 16:06:06.360010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:40:45.020 [2024-11-05 16:06:06.360017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:40:45.020 [2024-11-05 16:06:06.360024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:40:45.020 [2024-11-05 16:06:06.360033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:40:45.020 [2024-11-05 16:06:06.360043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:40:45.020 [2024-11-05 16:06:06.360064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:40:45.020 [2024-11-05 16:06:06.360072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:40:45.020 [2024-11-05 16:06:06.360080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:40:45.020 [2024-11-05 16:06:06.360088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:40:45.020 [2024-11-05 16:06:06.360096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:40:45.020 [2024-11-05 16:06:06.360112] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:40:45.020 [2024-11-05 16:06:06.360121] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 00f76824-5a3a-4487-9ed4-1ffd3d9e229e 00:40:45.020 [2024-11-05 16:06:06.360129] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:40:45.020 [2024-11-05 16:06:06.360137] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:40:45.020 [2024-11-05 16:06:06.360144] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:40:45.020 [2024-11-05 16:06:06.360153] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:40:45.020 [2024-11-05 16:06:06.360160] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:40:45.020 [2024-11-05 16:06:06.360169] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:40:45.020 [2024-11-05 16:06:06.360176] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:40:45.020 [2024-11-05 16:06:06.360183] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:40:45.020 [2024-11-05 16:06:06.360189] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:40:45.020 [2024-11-05 16:06:06.360196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:45.020 [2024-11-05 16:06:06.360206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:40:45.020 [2024-11-05 16:06:06.360215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.947 ms 00:40:45.020 [2024-11-05 16:06:06.360223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:45.020 [2024-11-05 16:06:06.373556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:45.020 [2024-11-05 16:06:06.373596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:40:45.020 [2024-11-05 16:06:06.373608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.301 ms 00:40:45.020 [2024-11-05 16:06:06.373616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:45.020 [2024-11-05 16:06:06.374040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:45.020 [2024-11-05 16:06:06.374054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:40:45.020 [2024-11-05 16:06:06.374063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.389 ms 00:40:45.020 [2024-11-05 16:06:06.374071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:45.281 [2024-11-05 16:06:06.412906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:45.281 [2024-11-05 16:06:06.412953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:40:45.281 [2024-11-05 16:06:06.412965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:45.281 [2024-11-05 16:06:06.412974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:45.281 [2024-11-05 16:06:06.413057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:45.281 [2024-11-05 16:06:06.413067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:40:45.281 [2024-11-05 16:06:06.413075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:45.281 [2024-11-05 16:06:06.413082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:45.281 [2024-11-05 16:06:06.413131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:45.281 [2024-11-05 16:06:06.413141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:40:45.281 [2024-11-05 16:06:06.413149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:45.281 [2024-11-05 16:06:06.413156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:45.281 [2024-11-05 16:06:06.413174] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:45.281 [2024-11-05 16:06:06.413186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:40:45.281 [2024-11-05 16:06:06.413194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:45.281 [2024-11-05 16:06:06.413201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:45.281 [2024-11-05 16:06:06.498540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:45.281 [2024-11-05 16:06:06.498595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:40:45.281 [2024-11-05 16:06:06.498608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:45.282 [2024-11-05 16:06:06.498617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:45.282 [2024-11-05 16:06:06.568900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:45.282 [2024-11-05 16:06:06.568955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:40:45.282 [2024-11-05 16:06:06.568967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:45.282 [2024-11-05 16:06:06.568976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:45.282 [2024-11-05 16:06:06.569036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:45.282 [2024-11-05 16:06:06.569047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:40:45.282 [2024-11-05 16:06:06.569056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:45.282 [2024-11-05 16:06:06.569065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:45.282 [2024-11-05 16:06:06.569097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:45.282 [2024-11-05 16:06:06.569107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:40:45.282 [2024-11-05 16:06:06.569122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:45.282 [2024-11-05 16:06:06.569131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:45.282 [2024-11-05 16:06:06.569229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:45.282 [2024-11-05 16:06:06.569240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:40:45.282 [2024-11-05 16:06:06.569255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:45.282 [2024-11-05 16:06:06.569263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:45.282 [2024-11-05 16:06:06.569301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:45.282 [2024-11-05 16:06:06.569311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:40:45.282 [2024-11-05 16:06:06.569320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:45.282 [2024-11-05 16:06:06.569332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:45.282 [2024-11-05 16:06:06.569373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:45.282 [2024-11-05 16:06:06.569383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:40:45.282 [2024-11-05 16:06:06.569392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:45.282 [2024-11-05 16:06:06.569401] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:40:45.282 [2024-11-05 16:06:06.569449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:45.282 [2024-11-05 16:06:06.569460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:40:45.282 [2024-11-05 16:06:06.569473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:45.282 [2024-11-05 16:06:06.569482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:45.282 [2024-11-05 16:06:06.569640] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 369.167 ms, result 0 00:40:46.270 00:40:46.270 00:40:46.270 16:06:07 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=74084 00:40:46.270 16:06:07 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 74084 00:40:46.270 16:06:07 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:40:46.270 16:06:07 ftl.ftl_trim -- common/autotest_common.sh@833 -- # '[' -z 74084 ']' 00:40:46.270 16:06:07 ftl.ftl_trim -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:46.270 16:06:07 ftl.ftl_trim -- common/autotest_common.sh@838 -- # local max_retries=100 00:40:46.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:46.270 16:06:07 ftl.ftl_trim -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:46.270 16:06:07 ftl.ftl_trim -- common/autotest_common.sh@842 -- # xtrace_disable 00:40:46.270 16:06:07 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:40:46.270 [2024-11-05 16:06:07.402559] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 
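[Editor's note] The harness blocks at this point until the freshly launched spdk_tgt (pid 74084) starts accepting RPCs on /var/tmp/spdk.sock; that is what the bash waitforlisten helper from autotest_common.sh, visible in the xtrace above, is doing. The same idea in Python, as a hedged sketch of the pattern rather than the harness's actual helper; the timeout and poll interval are arbitrary choices, not values from the log:

#!/usr/bin/env python3
# Poll an SPDK RPC UNIX socket until the target process is listening.
# Sketch of the waitforlisten idea; timeout/poll values are assumptions.
import socket
import time

SOCK_PATH = "/var/tmp/spdk.sock"  # default spdk_tgt RPC socket, as in the log

def wait_for_listen(path=SOCK_PATH, timeout=30.0, poll=0.1):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(path)       # succeeds once spdk_tgt has bound and listened
            return
        except OSError:
            time.sleep(poll)      # socket absent or not accepting yet; retry
        finally:
            s.close()
    raise TimeoutError(f"{path} did not start listening within {timeout}s")

if __name__ == "__main__":
    wait_for_listen()
    print("spdk_tgt is accepting RPCs")

Once the socket accepts connections, the test proceeds to drive the target over JSON-RPC, which is the scripts/rpc.py load_config call in the records that follow.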
00:40:46.270 [2024-11-05 16:06:07.402707] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74084 ] 00:40:46.270 [2024-11-05 16:06:07.566228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:46.532 [2024-11-05 16:06:07.685164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:47.104 16:06:08 ftl.ftl_trim -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:40:47.104 16:06:08 ftl.ftl_trim -- common/autotest_common.sh@866 -- # return 0 00:40:47.104 16:06:08 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:40:47.365 [2024-11-05 16:06:08.591964] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:40:47.365 [2024-11-05 16:06:08.592041] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:40:47.627 [2024-11-05 16:06:08.750687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:47.627 [2024-11-05 16:06:08.750947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:40:47.627 [2024-11-05 16:06:08.750979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:40:47.627 [2024-11-05 16:06:08.750989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:47.627 [2024-11-05 16:06:08.753985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:47.627 [2024-11-05 16:06:08.754157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:40:47.627 [2024-11-05 16:06:08.754180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.967 ms 00:40:47.627 [2024-11-05 16:06:08.754189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:47.627 [2024-11-05 16:06:08.754428] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:40:47.627 [2024-11-05 16:06:08.755672] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:40:47.627 [2024-11-05 16:06:08.755756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:47.627 [2024-11-05 16:06:08.755768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:40:47.627 [2024-11-05 16:06:08.755780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.349 ms 00:40:47.627 [2024-11-05 16:06:08.755788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:47.627 [2024-11-05 16:06:08.757543] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:40:47.627 [2024-11-05 16:06:08.771520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:47.627 [2024-11-05 16:06:08.771573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:40:47.627 [2024-11-05 16:06:08.771587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.985 ms 00:40:47.627 [2024-11-05 16:06:08.771598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:47.627 [2024-11-05 16:06:08.771711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:47.627 [2024-11-05 16:06:08.771725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:40:47.627 [2024-11-05 16:06:08.771756] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:40:47.627 [2024-11-05 16:06:08.771767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:47.627 [2024-11-05 16:06:08.779794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:47.627 [2024-11-05 16:06:08.779840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:40:47.627 [2024-11-05 16:06:08.779851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.972 ms 00:40:47.627 [2024-11-05 16:06:08.779860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:47.627 [2024-11-05 16:06:08.779979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:47.627 [2024-11-05 16:06:08.779993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:40:47.627 [2024-11-05 16:06:08.780002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:40:47.627 [2024-11-05 16:06:08.780012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:47.627 [2024-11-05 16:06:08.780044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:47.627 [2024-11-05 16:06:08.780054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:40:47.627 [2024-11-05 16:06:08.780062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:40:47.627 [2024-11-05 16:06:08.780071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:47.627 [2024-11-05 16:06:08.780096] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:40:47.627 [2024-11-05 16:06:08.784026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:47.627 [2024-11-05 16:06:08.784063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:40:47.627 [2024-11-05 16:06:08.784077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.934 ms 00:40:47.627 [2024-11-05 16:06:08.784084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:47.627 [2024-11-05 16:06:08.784162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:47.627 [2024-11-05 16:06:08.784171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:40:47.627 [2024-11-05 16:06:08.784188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:40:47.627 [2024-11-05 16:06:08.784199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:47.627 [2024-11-05 16:06:08.784221] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:40:47.627 [2024-11-05 16:06:08.784241] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:40:47.627 [2024-11-05 16:06:08.784285] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:40:47.627 [2024-11-05 16:06:08.784300] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:40:47.627 [2024-11-05 16:06:08.784412] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:40:47.627 [2024-11-05 16:06:08.784424] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:40:47.627 [2024-11-05 16:06:08.784437] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:40:47.627 [2024-11-05 16:06:08.784449] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:40:47.627 [2024-11-05 16:06:08.784461] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:40:47.627 [2024-11-05 16:06:08.784469] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:40:47.627 [2024-11-05 16:06:08.784479] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:40:47.627 [2024-11-05 16:06:08.784486] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:40:47.627 [2024-11-05 16:06:08.784498] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:40:47.627 [2024-11-05 16:06:08.784506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:47.627 [2024-11-05 16:06:08.784516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:40:47.627 [2024-11-05 16:06:08.784525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:40:47.627 [2024-11-05 16:06:08.784534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:47.627 [2024-11-05 16:06:08.784622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:47.627 [2024-11-05 16:06:08.784633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:40:47.627 [2024-11-05 16:06:08.784640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:40:47.627 [2024-11-05 16:06:08.784649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:47.627 [2024-11-05 16:06:08.784777] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:40:47.627 [2024-11-05 16:06:08.784790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:40:47.627 [2024-11-05 16:06:08.784799] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:40:47.627 [2024-11-05 16:06:08.784809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:47.627 [2024-11-05 16:06:08.784818] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:40:47.627 [2024-11-05 16:06:08.784828] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:40:47.627 [2024-11-05 16:06:08.784834] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:40:47.627 [2024-11-05 16:06:08.784846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:40:47.627 [2024-11-05 16:06:08.784854] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:40:47.627 [2024-11-05 16:06:08.784864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:40:47.627 [2024-11-05 16:06:08.784871] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:40:47.627 [2024-11-05 16:06:08.784880] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:40:47.627 [2024-11-05 16:06:08.784887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:40:47.627 [2024-11-05 16:06:08.784896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:40:47.627 [2024-11-05 16:06:08.784904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:40:47.627 [2024-11-05 16:06:08.784916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:47.627 
[2024-11-05 16:06:08.784925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:40:47.627 [2024-11-05 16:06:08.784934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:40:47.627 [2024-11-05 16:06:08.784940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:47.627 [2024-11-05 16:06:08.784950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:40:47.627 [2024-11-05 16:06:08.784964] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:40:47.627 [2024-11-05 16:06:08.784972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:47.627 [2024-11-05 16:06:08.784979] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:40:47.627 [2024-11-05 16:06:08.784989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:40:47.627 [2024-11-05 16:06:08.784996] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:47.627 [2024-11-05 16:06:08.785005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:40:47.627 [2024-11-05 16:06:08.785012] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:40:47.627 [2024-11-05 16:06:08.785019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:47.628 [2024-11-05 16:06:08.785026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:40:47.628 [2024-11-05 16:06:08.785036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:40:47.628 [2024-11-05 16:06:08.785042] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:47.628 [2024-11-05 16:06:08.785051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:40:47.628 [2024-11-05 16:06:08.785057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:40:47.628 [2024-11-05 16:06:08.785066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:40:47.628 [2024-11-05 16:06:08.785073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:40:47.628 [2024-11-05 16:06:08.785081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:40:47.628 [2024-11-05 16:06:08.785087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:40:47.628 [2024-11-05 16:06:08.785095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:40:47.628 [2024-11-05 16:06:08.785102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:40:47.628 [2024-11-05 16:06:08.785119] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:47.628 [2024-11-05 16:06:08.785125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:40:47.628 [2024-11-05 16:06:08.785134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:40:47.628 [2024-11-05 16:06:08.785140] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:47.628 [2024-11-05 16:06:08.785148] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:40:47.628 [2024-11-05 16:06:08.785156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:40:47.628 [2024-11-05 16:06:08.785167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:40:47.628 [2024-11-05 16:06:08.785174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:47.628 [2024-11-05 16:06:08.785184] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:40:47.628 [2024-11-05 16:06:08.785193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:40:47.628 [2024-11-05 16:06:08.785203] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:40:47.628 [2024-11-05 16:06:08.785210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:40:47.628 [2024-11-05 16:06:08.785218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:40:47.628 [2024-11-05 16:06:08.785225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:40:47.628 [2024-11-05 16:06:08.785235] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:40:47.628 [2024-11-05 16:06:08.785244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:40:47.628 [2024-11-05 16:06:08.785258] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:40:47.628 [2024-11-05 16:06:08.785266] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:40:47.628 [2024-11-05 16:06:08.785275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:40:47.628 [2024-11-05 16:06:08.785282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:40:47.628 [2024-11-05 16:06:08.785291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:40:47.628 [2024-11-05 16:06:08.785299] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:40:47.628 [2024-11-05 16:06:08.785309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:40:47.628 [2024-11-05 16:06:08.785317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:40:47.628 [2024-11-05 16:06:08.785327] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:40:47.628 [2024-11-05 16:06:08.785335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:40:47.628 [2024-11-05 16:06:08.785343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:40:47.628 [2024-11-05 16:06:08.785350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:40:47.628 [2024-11-05 16:06:08.785359] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:40:47.628 [2024-11-05 16:06:08.785367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:40:47.628 [2024-11-05 16:06:08.785376] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:40:47.628 [2024-11-05 
16:06:08.785384] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:40:47.628 [2024-11-05 16:06:08.785396] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:40:47.628 [2024-11-05 16:06:08.785403] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:40:47.628 [2024-11-05 16:06:08.785412] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:40:47.628 [2024-11-05 16:06:08.785420] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:40:47.628 [2024-11-05 16:06:08.785430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:47.628 [2024-11-05 16:06:08.785437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:40:47.628 [2024-11-05 16:06:08.785447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.741 ms 00:40:47.628 [2024-11-05 16:06:08.785454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:47.628 [2024-11-05 16:06:08.816971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:47.628 [2024-11-05 16:06:08.817020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:40:47.628 [2024-11-05 16:06:08.817035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.453 ms 00:40:47.628 [2024-11-05 16:06:08.817043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:47.628 [2024-11-05 16:06:08.817180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:47.628 [2024-11-05 16:06:08.817191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:40:47.628 [2024-11-05 16:06:08.817202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:40:47.628 [2024-11-05 16:06:08.817210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:47.628 [2024-11-05 16:06:08.851926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:47.628 [2024-11-05 16:06:08.851968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:40:47.628 [2024-11-05 16:06:08.851986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.689 ms 00:40:47.628 [2024-11-05 16:06:08.851994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:47.628 [2024-11-05 16:06:08.852081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:47.628 [2024-11-05 16:06:08.852092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:40:47.628 [2024-11-05 16:06:08.852103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:40:47.628 [2024-11-05 16:06:08.852110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:47.628 [2024-11-05 16:06:08.852627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:47.628 [2024-11-05 16:06:08.852656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:40:47.628 [2024-11-05 16:06:08.852671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.488 ms 00:40:47.628 [2024-11-05 16:06:08.852679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:40:47.628 [2024-11-05 16:06:08.852852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:47.628 [2024-11-05 16:06:08.852861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:40:47.628 [2024-11-05 16:06:08.852872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:40:47.628 [2024-11-05 16:06:08.852880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:47.628 [2024-11-05 16:06:08.870412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:47.628 [2024-11-05 16:06:08.870454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:40:47.628 [2024-11-05 16:06:08.870468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.506 ms 00:40:47.628 [2024-11-05 16:06:08.870476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:47.628 [2024-11-05 16:06:08.884691] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:40:47.628 [2024-11-05 16:06:08.884754] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:40:47.628 [2024-11-05 16:06:08.884771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:47.628 [2024-11-05 16:06:08.884780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:40:47.628 [2024-11-05 16:06:08.884792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.180 ms 00:40:47.628 [2024-11-05 16:06:08.884800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:47.628 [2024-11-05 16:06:08.910724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:47.628 [2024-11-05 16:06:08.910780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:40:47.628 [2024-11-05 16:06:08.910796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.826 ms 00:40:47.628 [2024-11-05 16:06:08.910804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:47.628 [2024-11-05 16:06:08.923502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:47.628 [2024-11-05 16:06:08.923544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:40:47.628 [2024-11-05 16:06:08.923562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.623 ms 00:40:47.628 [2024-11-05 16:06:08.923569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:47.628 [2024-11-05 16:06:08.935892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:47.628 [2024-11-05 16:06:08.935944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:40:47.628 [2024-11-05 16:06:08.935958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.235 ms 00:40:47.628 [2024-11-05 16:06:08.935966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:47.628 [2024-11-05 16:06:08.936633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:47.628 [2024-11-05 16:06:08.936654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:40:47.628 [2024-11-05 16:06:08.936667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.552 ms 00:40:47.628 [2024-11-05 16:06:08.936675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:47.889 [2024-11-05 
16:06:09.007066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:47.889 [2024-11-05 16:06:09.007140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:40:47.889 [2024-11-05 16:06:09.007160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.360 ms 00:40:47.889 [2024-11-05 16:06:09.007170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:47.889 [2024-11-05 16:06:09.018687] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:40:47.889 [2024-11-05 16:06:09.037662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:47.889 [2024-11-05 16:06:09.037721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:40:47.889 [2024-11-05 16:06:09.037757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.384 ms 00:40:47.889 [2024-11-05 16:06:09.037768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:47.889 [2024-11-05 16:06:09.037863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:47.889 [2024-11-05 16:06:09.037877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:40:47.889 [2024-11-05 16:06:09.037886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:40:47.889 [2024-11-05 16:06:09.037898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:47.889 [2024-11-05 16:06:09.037983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:47.889 [2024-11-05 16:06:09.037994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:40:47.889 [2024-11-05 16:06:09.038003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:40:47.889 [2024-11-05 16:06:09.038013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:47.889 [2024-11-05 16:06:09.038041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:47.889 [2024-11-05 16:06:09.038053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:40:47.889 [2024-11-05 16:06:09.038062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:40:47.889 [2024-11-05 16:06:09.038075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:47.889 [2024-11-05 16:06:09.038111] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:40:47.889 [2024-11-05 16:06:09.038125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:47.889 [2024-11-05 16:06:09.038133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:40:47.889 [2024-11-05 16:06:09.038147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:40:47.889 [2024-11-05 16:06:09.038154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:47.889 [2024-11-05 16:06:09.064088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:47.889 [2024-11-05 16:06:09.064136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:40:47.889 [2024-11-05 16:06:09.064154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.903 ms 00:40:47.889 [2024-11-05 16:06:09.064162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:47.889 [2024-11-05 16:06:09.064296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:47.889 [2024-11-05 16:06:09.064309] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:40:47.889 [2024-11-05 16:06:09.064320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:40:47.889 [2024-11-05 16:06:09.064332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:47.889 [2024-11-05 16:06:09.065418] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:40:47.889 [2024-11-05 16:06:09.068850] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 314.417 ms, result 0 00:40:47.889 [2024-11-05 16:06:09.070684] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:40:47.889 Some configs were skipped because the RPC state that can call them passed over. 00:40:47.889 16:06:09 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:40:48.150 [2024-11-05 16:06:09.311656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:48.150 [2024-11-05 16:06:09.311719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:40:48.150 [2024-11-05 16:06:09.311755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.370 ms 00:40:48.150 [2024-11-05 16:06:09.311768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:48.150 [2024-11-05 16:06:09.311807] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.528 ms, result 0 00:40:48.150 true 00:40:48.150 16:06:09 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:40:48.410 [2024-11-05 16:06:09.523388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:48.410 [2024-11-05 16:06:09.523449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:40:48.410 [2024-11-05 16:06:09.523465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.855 ms 00:40:48.410 [2024-11-05 16:06:09.523473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:48.410 [2024-11-05 16:06:09.523514] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.986 ms, result 0 00:40:48.410 true 00:40:48.410 16:06:09 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 74084 00:40:48.410 16:06:09 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 74084 ']' 00:40:48.410 16:06:09 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 74084 00:40:48.410 16:06:09 ftl.ftl_trim -- common/autotest_common.sh@957 -- # uname 00:40:48.410 16:06:09 ftl.ftl_trim -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:40:48.410 16:06:09 ftl.ftl_trim -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74084 00:40:48.410 killing process with pid 74084 00:40:48.410 16:06:09 ftl.ftl_trim -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:40:48.411 16:06:09 ftl.ftl_trim -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:40:48.411 16:06:09 ftl.ftl_trim -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74084' 00:40:48.411 16:06:09 ftl.ftl_trim -- common/autotest_common.sh@971 -- # kill 74084 00:40:48.411 16:06:09 ftl.ftl_trim -- common/autotest_common.sh@976 -- # wait 74084 00:40:48.979 [2024-11-05 16:06:10.200019] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:48.979 [2024-11-05 16:06:10.200072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:40:48.979 [2024-11-05 16:06:10.200082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:40:48.979 [2024-11-05 16:06:10.200090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:48.979 [2024-11-05 16:06:10.200117] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:40:48.979 [2024-11-05 16:06:10.202174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:48.979 [2024-11-05 16:06:10.202202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:40:48.979 [2024-11-05 16:06:10.202213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.043 ms 00:40:48.979 [2024-11-05 16:06:10.202220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:48.979 [2024-11-05 16:06:10.202447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:48.979 [2024-11-05 16:06:10.202459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:40:48.979 [2024-11-05 16:06:10.202467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:40:48.979 [2024-11-05 16:06:10.202473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:48.979 [2024-11-05 16:06:10.205628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:48.979 [2024-11-05 16:06:10.205652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:40:48.979 [2024-11-05 16:06:10.205663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.139 ms 00:40:48.979 [2024-11-05 16:06:10.205669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:48.979 [2024-11-05 16:06:10.210930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:48.979 [2024-11-05 16:06:10.210957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:40:48.979 [2024-11-05 16:06:10.210967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.233 ms 00:40:48.979 [2024-11-05 16:06:10.210973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:48.979 [2024-11-05 16:06:10.218063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:48.979 [2024-11-05 16:06:10.218089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:40:48.979 [2024-11-05 16:06:10.218099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.034 ms 00:40:48.979 [2024-11-05 16:06:10.218109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:48.979 [2024-11-05 16:06:10.223838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:48.979 [2024-11-05 16:06:10.223865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:40:48.979 [2024-11-05 16:06:10.223876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.697 ms 00:40:48.979 [2024-11-05 16:06:10.223882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:48.979 [2024-11-05 16:06:10.223971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:48.979 [2024-11-05 16:06:10.223977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:40:48.979 [2024-11-05 16:06:10.223985] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:40:48.979 [2024-11-05 16:06:10.223991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:48.979 [2024-11-05 16:06:10.231638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:48.979 [2024-11-05 16:06:10.231665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:40:48.979 [2024-11-05 16:06:10.231673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.631 ms 00:40:48.979 [2024-11-05 16:06:10.231679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:48.979 [2024-11-05 16:06:10.238933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:48.979 [2024-11-05 16:06:10.238957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:40:48.979 [2024-11-05 16:06:10.238966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.215 ms 00:40:48.979 [2024-11-05 16:06:10.238972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:48.979 [2024-11-05 16:06:10.245608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:48.979 [2024-11-05 16:06:10.245634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:40:48.979 [2024-11-05 16:06:10.245644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.600 ms 00:40:48.979 [2024-11-05 16:06:10.245649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:48.979 [2024-11-05 16:06:10.252628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:48.979 [2024-11-05 16:06:10.252655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:40:48.979 [2024-11-05 16:06:10.252663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.930 ms 00:40:48.979 [2024-11-05 16:06:10.252669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:48.979 [2024-11-05 16:06:10.252696] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:40:48.979 [2024-11-05 16:06:10.252707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:40:48.979 [2024-11-05 16:06:10.252716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:40:48.979 [2024-11-05 16:06:10.252723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:40:48.979 [2024-11-05 16:06:10.252731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:40:48.979 [2024-11-05 16:06:10.252752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:40:48.979 [2024-11-05 16:06:10.252760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:40:48.979 [2024-11-05 16:06:10.252766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:40:48.979 [2024-11-05 16:06:10.252773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:40:48.979 [2024-11-05 16:06:10.252779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:40:48.979 [2024-11-05 16:06:10.252786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:40:48.979 [2024-11-05 16:06:10.252792] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:40:48.979 [2024-11-05 16:06:10.252799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:40:48.979 [2024-11-05 16:06:10.252805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:40:48.979 [2024-11-05 16:06:10.252812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:40:48.979 [2024-11-05 16:06:10.252817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:40:48.979 [2024-11-05 16:06:10.252824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:40:48.979 [2024-11-05 16:06:10.252830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:40:48.979 [2024-11-05 16:06:10.252837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:40:48.979 [2024-11-05 16:06:10.252843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:40:48.979 [2024-11-05 16:06:10.252851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:40:48.979 [2024-11-05 16:06:10.252857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:40:48.979 [2024-11-05 16:06:10.252865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:40:48.979 [2024-11-05 16:06:10.252870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:40:48.979 [2024-11-05 16:06:10.252877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:40:48.979 [2024-11-05 16:06:10.252883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:40:48.979 [2024-11-05 16:06:10.252889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:40:48.979 [2024-11-05 16:06:10.252895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:40:48.979 [2024-11-05 16:06:10.252901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:40:48.979 [2024-11-05 16:06:10.252908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:40:48.979 [2024-11-05 16:06:10.252914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:40:48.979 [2024-11-05 16:06:10.252920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:40:48.979 [2024-11-05 16:06:10.252928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:40:48.979 [2024-11-05 16:06:10.252934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:40:48.980 [2024-11-05 16:06:10.252941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:40:48.980 [2024-11-05 16:06:10.252947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:40:48.980 
[2024-11-05 16:06:10.252954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:40:48.980 [2024-11-05 16:06:10.252960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:40:48.980 [2024-11-05 16:06:10.252968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:40:48.980 [2024-11-05 16:06:10.252973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:40:48.980 [2024-11-05 16:06:10.252980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:40:48.980 [2024-11-05 16:06:10.252985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:40:48.980 [2024-11-05 16:06:10.252992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:40:48.980 [2024-11-05 16:06:10.252998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:40:48.980 [2024-11-05 16:06:10.253005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:40:48.980 [2024-11-05 16:06:10.253011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:40:48.980 [2024-11-05 16:06:10.253018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:40:48.980 [2024-11-05 16:06:10.253024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:40:48.980 [2024-11-05 16:06:10.253030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:40:48.980 [2024-11-05 16:06:10.253036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:40:48.980 [2024-11-05 16:06:10.253043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:40:48.980 [2024-11-05 16:06:10.253048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:40:48.980 [2024-11-05 16:06:10.253055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:40:48.980 [2024-11-05 16:06:10.253062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:40:48.980 [2024-11-05 16:06:10.253070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:40:48.980 [2024-11-05 16:06:10.253076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:40:48.980 [2024-11-05 16:06:10.253083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:40:48.980 [2024-11-05 16:06:10.253089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:40:48.980 [2024-11-05 16:06:10.253095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:40:48.980 [2024-11-05 16:06:10.253101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:40:48.980 [2024-11-05 16:06:10.253108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:40:48.980 [2024-11-05 16:06:10.253113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:40:48.980 [2024-11-05 16:06:10.253120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:40:48.980 [2024-11-05 16:06:10.253125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:40:48.980 [2024-11-05 16:06:10.253132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:40:48.980 [2024-11-05 16:06:10.253138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:40:48.980 [2024-11-05 16:06:10.253145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:40:48.980 [2024-11-05 16:06:10.253150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:40:48.980 [2024-11-05 16:06:10.253157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:40:48.980 [2024-11-05 16:06:10.253162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:40:48.980 [2024-11-05 16:06:10.253171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:40:48.980 [2024-11-05 16:06:10.253177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:40:48.980 [2024-11-05 16:06:10.253184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:40:48.980 [2024-11-05 16:06:10.253190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:40:48.980 [2024-11-05 16:06:10.253197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:40:48.980 [2024-11-05 16:06:10.253203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:40:48.980 [2024-11-05 16:06:10.253210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:40:48.980 [2024-11-05 16:06:10.253216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:40:48.980 [2024-11-05 16:06:10.253223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:40:48.980 [2024-11-05 16:06:10.253228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:40:48.980 [2024-11-05 16:06:10.253235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:40:48.980 [2024-11-05 16:06:10.253240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:40:48.980 [2024-11-05 16:06:10.253247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:40:48.980 [2024-11-05 16:06:10.253253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:40:48.980 [2024-11-05 16:06:10.253260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:40:48.980 [2024-11-05 16:06:10.253265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:40:48.980 [2024-11-05 16:06:10.253273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:40:48.980 [2024-11-05 16:06:10.253279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:40:48.980 [2024-11-05 16:06:10.253286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:40:48.980 [2024-11-05 16:06:10.253291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:40:48.980 [2024-11-05 16:06:10.253298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:40:48.980 [2024-11-05 16:06:10.253303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:40:48.980 [2024-11-05 16:06:10.253310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:40:48.980 [2024-11-05 16:06:10.253316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:40:48.980 [2024-11-05 16:06:10.253324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:40:48.980 [2024-11-05 16:06:10.253329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:40:48.980 [2024-11-05 16:06:10.253336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:40:48.980 [2024-11-05 16:06:10.253342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:40:48.980 [2024-11-05 16:06:10.253348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:40:48.980 [2024-11-05 16:06:10.253354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:40:48.980 [2024-11-05 16:06:10.253362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:40:48.980 [2024-11-05 16:06:10.253373] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:40:48.980 [2024-11-05 16:06:10.253387] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 00f76824-5a3a-4487-9ed4-1ffd3d9e229e 00:40:48.980 [2024-11-05 16:06:10.253396] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:40:48.980 [2024-11-05 16:06:10.253405] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:40:48.980 [2024-11-05 16:06:10.253411] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:40:48.980 [2024-11-05 16:06:10.253418] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:40:48.980 [2024-11-05 16:06:10.253423] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:40:48.980 [2024-11-05 16:06:10.253430] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:40:48.980 [2024-11-05 16:06:10.253436] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:40:48.980 [2024-11-05 16:06:10.253443] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:40:48.980 [2024-11-05 16:06:10.253447] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:40:48.980 [2024-11-05 16:06:10.253454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
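The dump above is the FTL shutdown statistics block from ftl_debug.c: all 100 bands report "0 / 261120 wr_cnt: 0 state: free", and with "user writes: 0" the write-amplification factor prints as "inf", consistent with WAF being total writes divided by user writes; the 960 total writes are then presumably internal metadata traffic from the startup/shutdown sequence rather than user I/O. A quick way to condense a band dump like this when reading a captured console log (a sketch only; console.log is a hypothetical capture of this output, not a file the job produces):

  grep -oE 'state: [a-z]+' console.log | sort | uniq -c    # bands per state; expect "100 state: free" for this run
  grep -oE 'wr_cnt: [0-9]+' console.log | sort | uniq -c   # distribution of per-band write counts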
00:40:48.980 [2024-11-05 16:06:10.253459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:40:48.980 [2024-11-05 16:06:10.253467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.758 ms 00:40:48.980 [2024-11-05 16:06:10.253472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:48.980 [2024-11-05 16:06:10.263101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:48.980 [2024-11-05 16:06:10.263125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:40:48.980 [2024-11-05 16:06:10.263136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.611 ms 00:40:48.980 [2024-11-05 16:06:10.263142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:48.980 [2024-11-05 16:06:10.263435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:48.980 [2024-11-05 16:06:10.263448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:40:48.980 [2024-11-05 16:06:10.263457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:40:48.980 [2024-11-05 16:06:10.263464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:48.980 [2024-11-05 16:06:10.298364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:48.980 [2024-11-05 16:06:10.298393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:40:48.980 [2024-11-05 16:06:10.298402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:48.980 [2024-11-05 16:06:10.298408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:48.980 [2024-11-05 16:06:10.298484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:48.981 [2024-11-05 16:06:10.298491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:40:48.981 [2024-11-05 16:06:10.298499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:48.981 [2024-11-05 16:06:10.298506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:48.981 [2024-11-05 16:06:10.298544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:48.981 [2024-11-05 16:06:10.298552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:40:48.981 [2024-11-05 16:06:10.298561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:48.981 [2024-11-05 16:06:10.298567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:48.981 [2024-11-05 16:06:10.298582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:48.981 [2024-11-05 16:06:10.298588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:40:48.981 [2024-11-05 16:06:10.298595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:48.981 [2024-11-05 16:06:10.298600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:49.239 [2024-11-05 16:06:10.358731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:49.239 [2024-11-05 16:06:10.358770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:40:49.239 [2024-11-05 16:06:10.358781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:49.239 [2024-11-05 16:06:10.358787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:49.239 [2024-11-05 
16:06:10.408196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:49.239 [2024-11-05 16:06:10.408224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:40:49.239 [2024-11-05 16:06:10.408235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:49.239 [2024-11-05 16:06:10.408243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:49.239 [2024-11-05 16:06:10.408302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:49.239 [2024-11-05 16:06:10.408310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:40:49.239 [2024-11-05 16:06:10.408319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:49.239 [2024-11-05 16:06:10.408325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:49.239 [2024-11-05 16:06:10.408349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:49.239 [2024-11-05 16:06:10.408355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:40:49.239 [2024-11-05 16:06:10.408363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:49.239 [2024-11-05 16:06:10.408369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:49.239 [2024-11-05 16:06:10.408438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:49.239 [2024-11-05 16:06:10.408445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:40:49.239 [2024-11-05 16:06:10.408452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:49.239 [2024-11-05 16:06:10.408458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:49.239 [2024-11-05 16:06:10.408484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:49.239 [2024-11-05 16:06:10.408492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:40:49.239 [2024-11-05 16:06:10.408498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:49.239 [2024-11-05 16:06:10.408504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:49.239 [2024-11-05 16:06:10.408533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:49.239 [2024-11-05 16:06:10.408542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:40:49.239 [2024-11-05 16:06:10.408550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:49.239 [2024-11-05 16:06:10.408556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:49.239 [2024-11-05 16:06:10.408591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:49.239 [2024-11-05 16:06:10.408599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:40:49.239 [2024-11-05 16:06:10.408607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:49.239 [2024-11-05 16:06:10.408613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:49.239 [2024-11-05 16:06:10.408713] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 208.678 ms, result 0 00:40:49.805 16:06:10 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:40:49.805 [2024-11-05 16:06:10.986065] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 00:40:49.805 [2024-11-05 16:06:10.986536] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74137 ] 00:40:49.805 [2024-11-05 16:06:11.143017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:50.063 [2024-11-05 16:06:11.224246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:50.325 [2024-11-05 16:06:11.428503] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:40:50.325 [2024-11-05 16:06:11.428549] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:40:50.325 [2024-11-05 16:06:11.585588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:50.325 [2024-11-05 16:06:11.585637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:40:50.325 [2024-11-05 16:06:11.585651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:40:50.325 [2024-11-05 16:06:11.585659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.325 [2024-11-05 16:06:11.588410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:50.325 [2024-11-05 16:06:11.588451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:40:50.325 [2024-11-05 16:06:11.588461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.731 ms 00:40:50.325 [2024-11-05 16:06:11.588468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.325 [2024-11-05 16:06:11.588547] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:40:50.325 [2024-11-05 16:06:11.589275] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:40:50.325 [2024-11-05 16:06:11.589306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:50.325 [2024-11-05 16:06:11.589314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:40:50.325 [2024-11-05 16:06:11.589323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.765 ms 00:40:50.325 [2024-11-05 16:06:11.589330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.325 [2024-11-05 16:06:11.591108] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:40:50.325 [2024-11-05 16:06:11.604221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:50.325 [2024-11-05 16:06:11.604268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:40:50.325 [2024-11-05 16:06:11.604281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.115 ms 00:40:50.325 [2024-11-05 16:06:11.604289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.325 [2024-11-05 16:06:11.604392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:50.325 [2024-11-05 16:06:11.604405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:40:50.325 [2024-11-05 16:06:11.604414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:40:50.325 [2024-11-05 
16:06:11.604424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.325 [2024-11-05 16:06:11.610981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:50.325 [2024-11-05 16:06:11.611015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:40:50.325 [2024-11-05 16:06:11.611025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.514 ms 00:40:50.325 [2024-11-05 16:06:11.611033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.325 [2024-11-05 16:06:11.611128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:50.325 [2024-11-05 16:06:11.611138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:40:50.325 [2024-11-05 16:06:11.611147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:40:50.325 [2024-11-05 16:06:11.611154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.325 [2024-11-05 16:06:11.611183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:50.325 [2024-11-05 16:06:11.611193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:40:50.325 [2024-11-05 16:06:11.611201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:40:50.325 [2024-11-05 16:06:11.611209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.325 [2024-11-05 16:06:11.611231] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:40:50.325 [2024-11-05 16:06:11.615030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:50.325 [2024-11-05 16:06:11.615063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:40:50.325 [2024-11-05 16:06:11.615073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.806 ms 00:40:50.325 [2024-11-05 16:06:11.615081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.325 [2024-11-05 16:06:11.615147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:50.325 [2024-11-05 16:06:11.615158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:40:50.325 [2024-11-05 16:06:11.615166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:40:50.325 [2024-11-05 16:06:11.615173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.325 [2024-11-05 16:06:11.615194] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:40:50.325 [2024-11-05 16:06:11.615214] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:40:50.325 [2024-11-05 16:06:11.615254] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:40:50.325 [2024-11-05 16:06:11.615270] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:40:50.325 [2024-11-05 16:06:11.615374] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:40:50.325 [2024-11-05 16:06:11.615385] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:40:50.325 [2024-11-05 16:06:11.615395] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
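The records above are the FTL startup sequence replayed inside spdk_dd: the two "Currently unable to find bdev with name: nvc0n1" notices are logged apparently while the JSON config is still creating the devices, then the superblock is reloaded ("SHM: clean 0, shm_clean 0") and the layout validated again before the copy starts. For reference, the three commands the trim test traced above (ftl/trim.sh@99, @100 and @105), in the order this run issued them:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

The two unmaps trim 1024 blocks at each end of the device (23591936 + 1024 lands exactly on the 23592960-entry L2P the layout dump reports), and spdk_dd then reads ftl0 back into the test data file.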
00:40:50.325 [2024-11-05 16:06:11.615409] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:40:50.325 [2024-11-05 16:06:11.615417] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:40:50.325 [2024-11-05 16:06:11.615425] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:40:50.325 [2024-11-05 16:06:11.615432] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:40:50.325 [2024-11-05 16:06:11.615440] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:40:50.325 [2024-11-05 16:06:11.615447] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:40:50.325 [2024-11-05 16:06:11.615456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:50.325 [2024-11-05 16:06:11.615464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:40:50.325 [2024-11-05 16:06:11.615472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:40:50.325 [2024-11-05 16:06:11.615480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.325 [2024-11-05 16:06:11.615568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:50.325 [2024-11-05 16:06:11.615579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:40:50.325 [2024-11-05 16:06:11.615587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:40:50.325 [2024-11-05 16:06:11.615595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.325 [2024-11-05 16:06:11.615692] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:40:50.325 [2024-11-05 16:06:11.615703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:40:50.325 [2024-11-05 16:06:11.615712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:40:50.325 [2024-11-05 16:06:11.615719] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:50.325 [2024-11-05 16:06:11.615728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:40:50.325 [2024-11-05 16:06:11.615752] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:40:50.325 [2024-11-05 16:06:11.615760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:40:50.325 [2024-11-05 16:06:11.615767] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:40:50.325 [2024-11-05 16:06:11.615774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:40:50.325 [2024-11-05 16:06:11.615781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:40:50.325 [2024-11-05 16:06:11.615788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:40:50.325 [2024-11-05 16:06:11.615797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:40:50.325 [2024-11-05 16:06:11.615805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:40:50.325 [2024-11-05 16:06:11.615820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:40:50.325 [2024-11-05 16:06:11.615827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:40:50.325 [2024-11-05 16:06:11.615834] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:50.325 [2024-11-05 16:06:11.615841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:40:50.325 [2024-11-05 16:06:11.615849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:40:50.325 [2024-11-05 16:06:11.615855] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:50.325 [2024-11-05 16:06:11.615863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:40:50.325 [2024-11-05 16:06:11.615870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:40:50.325 [2024-11-05 16:06:11.615878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:50.325 [2024-11-05 16:06:11.615884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:40:50.326 [2024-11-05 16:06:11.615892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:40:50.326 [2024-11-05 16:06:11.615898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:50.326 [2024-11-05 16:06:11.615907] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:40:50.326 [2024-11-05 16:06:11.615914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:40:50.326 [2024-11-05 16:06:11.615921] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:50.326 [2024-11-05 16:06:11.615928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:40:50.326 [2024-11-05 16:06:11.615934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:40:50.326 [2024-11-05 16:06:11.615941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:50.326 [2024-11-05 16:06:11.615947] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:40:50.326 [2024-11-05 16:06:11.615954] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:40:50.326 [2024-11-05 16:06:11.615960] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:40:50.326 [2024-11-05 16:06:11.615967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:40:50.326 [2024-11-05 16:06:11.615973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:40:50.326 [2024-11-05 16:06:11.615980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:40:50.326 [2024-11-05 16:06:11.615986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:40:50.326 [2024-11-05 16:06:11.615993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:40:50.326 [2024-11-05 16:06:11.616000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:50.326 [2024-11-05 16:06:11.616007] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:40:50.326 [2024-11-05 16:06:11.616014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:40:50.326 [2024-11-05 16:06:11.616021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:50.326 [2024-11-05 16:06:11.616029] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:40:50.326 [2024-11-05 16:06:11.616037] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:40:50.326 [2024-11-05 16:06:11.616047] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:40:50.326 [2024-11-05 16:06:11.616054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:50.326 [2024-11-05 16:06:11.616062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:40:50.326 [2024-11-05 16:06:11.616069] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:40:50.326 [2024-11-05 16:06:11.616076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:40:50.326 [2024-11-05 16:06:11.616083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:40:50.326 [2024-11-05 16:06:11.616090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:40:50.326 [2024-11-05 16:06:11.616097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:40:50.326 [2024-11-05 16:06:11.616106] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:40:50.326 [2024-11-05 16:06:11.616116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:40:50.326 [2024-11-05 16:06:11.616125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:40:50.326 [2024-11-05 16:06:11.616134] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:40:50.326 [2024-11-05 16:06:11.616141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:40:50.326 [2024-11-05 16:06:11.616149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:40:50.326 [2024-11-05 16:06:11.616156] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:40:50.326 [2024-11-05 16:06:11.616163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:40:50.326 [2024-11-05 16:06:11.616171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:40:50.326 [2024-11-05 16:06:11.616179] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:40:50.326 [2024-11-05 16:06:11.616187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:40:50.326 [2024-11-05 16:06:11.616195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:40:50.326 [2024-11-05 16:06:11.616203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:40:50.326 [2024-11-05 16:06:11.616211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:40:50.326 [2024-11-05 16:06:11.616218] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:40:50.326 [2024-11-05 16:06:11.616226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:40:50.326 [2024-11-05 16:06:11.616233] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:40:50.326 [2024-11-05 16:06:11.616242] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:40:50.326 [2024-11-05 16:06:11.616250] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:40:50.326 [2024-11-05 16:06:11.616258] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:40:50.326 [2024-11-05 16:06:11.616266] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:40:50.326 [2024-11-05 16:06:11.616273] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:40:50.326 [2024-11-05 16:06:11.616282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:50.326 [2024-11-05 16:06:11.616293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:40:50.326 [2024-11-05 16:06:11.616301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.659 ms 00:40:50.326 [2024-11-05 16:06:11.616309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.326 [2024-11-05 16:06:11.646984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:50.326 [2024-11-05 16:06:11.647032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:40:50.326 [2024-11-05 16:06:11.647043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.620 ms 00:40:50.326 [2024-11-05 16:06:11.647052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.326 [2024-11-05 16:06:11.647187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:50.326 [2024-11-05 16:06:11.647199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:40:50.326 [2024-11-05 16:06:11.647207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:40:50.326 [2024-11-05 16:06:11.647217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.588 [2024-11-05 16:06:11.694553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:50.588 [2024-11-05 16:06:11.694608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:40:50.588 [2024-11-05 16:06:11.694625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.314 ms 00:40:50.588 [2024-11-05 16:06:11.694634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.588 [2024-11-05 16:06:11.694768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:50.588 [2024-11-05 16:06:11.694782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:40:50.588 [2024-11-05 16:06:11.694793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:40:50.588 [2024-11-05 16:06:11.694803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.588 [2024-11-05 16:06:11.695302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:50.588 [2024-11-05 16:06:11.695345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:40:50.588 [2024-11-05 16:06:11.695356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.473 ms 00:40:50.588 [2024-11-05 16:06:11.695370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.588 [2024-11-05 16:06:11.695520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:40:50.588 [2024-11-05 16:06:11.695530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:40:50.588 [2024-11-05 16:06:11.695539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:40:50.588 [2024-11-05 16:06:11.695547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.588 [2024-11-05 16:06:11.711528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:50.588 [2024-11-05 16:06:11.711574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:40:50.588 [2024-11-05 16:06:11.711586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.959 ms 00:40:50.588 [2024-11-05 16:06:11.711594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.588 [2024-11-05 16:06:11.725814] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:40:50.588 [2024-11-05 16:06:11.725862] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:40:50.588 [2024-11-05 16:06:11.725876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:50.588 [2024-11-05 16:06:11.725885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:40:50.588 [2024-11-05 16:06:11.725894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.164 ms 00:40:50.588 [2024-11-05 16:06:11.725901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.588 [2024-11-05 16:06:11.751576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:50.588 [2024-11-05 16:06:11.751648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:40:50.588 [2024-11-05 16:06:11.751661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.580 ms 00:40:50.588 [2024-11-05 16:06:11.751669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.588 [2024-11-05 16:06:11.764608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:50.588 [2024-11-05 16:06:11.764654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:40:50.588 [2024-11-05 16:06:11.764666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.832 ms 00:40:50.588 [2024-11-05 16:06:11.764674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.588 [2024-11-05 16:06:11.777190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:50.588 [2024-11-05 16:06:11.777235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:40:50.588 [2024-11-05 16:06:11.777247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.407 ms 00:40:50.588 [2024-11-05 16:06:11.777254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.588 [2024-11-05 16:06:11.777918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:50.588 [2024-11-05 16:06:11.777951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:40:50.588 [2024-11-05 16:06:11.777962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.545 ms 00:40:50.588 [2024-11-05 16:06:11.777970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.588 [2024-11-05 16:06:11.841911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:50.588 [2024-11-05 
16:06:11.841980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:40:50.588 [2024-11-05 16:06:11.841998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.914 ms 00:40:50.588 [2024-11-05 16:06:11.842007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.588 [2024-11-05 16:06:11.853168] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:40:50.588 [2024-11-05 16:06:11.871857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:50.588 [2024-11-05 16:06:11.871904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:40:50.588 [2024-11-05 16:06:11.871918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.750 ms 00:40:50.588 [2024-11-05 16:06:11.871933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.588 [2024-11-05 16:06:11.872026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:50.588 [2024-11-05 16:06:11.872038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:40:50.588 [2024-11-05 16:06:11.872048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:40:50.588 [2024-11-05 16:06:11.872057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.588 [2024-11-05 16:06:11.872116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:50.589 [2024-11-05 16:06:11.872128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:40:50.589 [2024-11-05 16:06:11.872137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:40:50.589 [2024-11-05 16:06:11.872150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.589 [2024-11-05 16:06:11.872176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:50.589 [2024-11-05 16:06:11.872185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:40:50.589 [2024-11-05 16:06:11.872193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:40:50.589 [2024-11-05 16:06:11.872201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.589 [2024-11-05 16:06:11.872239] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:40:50.589 [2024-11-05 16:06:11.872249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:50.589 [2024-11-05 16:06:11.872257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:40:50.589 [2024-11-05 16:06:11.872266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:40:50.589 [2024-11-05 16:06:11.872274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.589 [2024-11-05 16:06:11.898534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:50.589 [2024-11-05 16:06:11.898586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:40:50.589 [2024-11-05 16:06:11.898600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.237 ms 00:40:50.589 [2024-11-05 16:06:11.898609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.589 [2024-11-05 16:06:11.898727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:50.589 [2024-11-05 16:06:11.898756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:40:50.589 [2024-11-05 
16:06:11.898767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:40:50.589 [2024-11-05 16:06:11.898777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:50.589 [2024-11-05 16:06:11.899925] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:40:50.589 [2024-11-05 16:06:11.903527] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 313.966 ms, result 0 00:40:50.589 [2024-11-05 16:06:11.904940] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:40:50.589 [2024-11-05 16:06:11.918580] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:40:51.976  [2024-11-05T16:06:14.283Z] Copying: 15/256 [MB] (15 MBps) [2024-11-05T16:06:15.227Z] Copying: 27/256 [MB] (11 MBps) [2024-11-05T16:06:16.171Z] Copying: 38/256 [MB] (10 MBps) [2024-11-05T16:06:17.111Z] Copying: 48/256 [MB] (10 MBps) [2024-11-05T16:06:18.052Z] Copying: 58/256 [MB] (10 MBps) [2024-11-05T16:06:18.992Z] Copying: 73/256 [MB] (14 MBps) [2024-11-05T16:06:20.389Z] Copying: 93/256 [MB] (20 MBps) [2024-11-05T16:06:21.332Z] Copying: 111/256 [MB] (18 MBps) [2024-11-05T16:06:22.277Z] Copying: 130/256 [MB] (19 MBps) [2024-11-05T16:06:23.217Z] Copying: 145/256 [MB] (14 MBps) [2024-11-05T16:06:24.159Z] Copying: 158/256 [MB] (12 MBps) [2024-11-05T16:06:25.104Z] Copying: 184/256 [MB] (26 MBps) [2024-11-05T16:06:26.046Z] Copying: 206/256 [MB] (21 MBps) [2024-11-05T16:06:26.989Z] Copying: 225/256 [MB] (19 MBps) [2024-11-05T16:06:27.933Z] Copying: 243/256 [MB] (17 MBps) [2024-11-05T16:06:28.194Z] Copying: 256/256 [MB] (average 16 MBps)[2024-11-05 16:06:28.153074] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:41:06.832 [2024-11-05 16:06:28.168163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:06.832 [2024-11-05 16:06:28.168219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:41:06.832 [2024-11-05 16:06:28.168236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:41:06.832 [2024-11-05 16:06:28.168257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:06.832 [2024-11-05 16:06:28.168285] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:41:06.832 [2024-11-05 16:06:28.171391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:06.832 [2024-11-05 16:06:28.171436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:41:06.832 [2024-11-05 16:06:28.171449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.089 ms 00:41:06.832 [2024-11-05 16:06:28.171458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:06.832 [2024-11-05 16:06:28.171766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:06.832 [2024-11-05 16:06:28.171780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:41:06.832 [2024-11-05 16:06:28.171790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:41:06.832 [2024-11-05 16:06:28.171798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:06.832 [2024-11-05 16:06:28.175513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:06.832 [2024-11-05 
16:06:28.175563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:41:06.832 [2024-11-05 16:06:28.175573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.698 ms 00:41:06.832 [2024-11-05 16:06:28.175581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:06.832 [2024-11-05 16:06:28.182487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:06.832 [2024-11-05 16:06:28.182527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:41:06.832 [2024-11-05 16:06:28.182539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.886 ms 00:41:06.832 [2024-11-05 16:06:28.182548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:07.094 [2024-11-05 16:06:28.209068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:07.094 [2024-11-05 16:06:28.209121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:41:07.094 [2024-11-05 16:06:28.209135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.441 ms 00:41:07.095 [2024-11-05 16:06:28.209143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:07.095 [2024-11-05 16:06:28.225649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:07.095 [2024-11-05 16:06:28.225705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:41:07.095 [2024-11-05 16:06:28.225726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.428 ms 00:41:07.095 [2024-11-05 16:06:28.225751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:07.095 [2024-11-05 16:06:28.225926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:07.095 [2024-11-05 16:06:28.225940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:41:07.095 [2024-11-05 16:06:28.225951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:41:07.095 [2024-11-05 16:06:28.225959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:07.095 [2024-11-05 16:06:28.252551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:07.095 [2024-11-05 16:06:28.252602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:41:07.095 [2024-11-05 16:06:28.252615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.564 ms 00:41:07.095 [2024-11-05 16:06:28.252623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:07.095 [2024-11-05 16:06:28.278744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:07.095 [2024-11-05 16:06:28.278794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:41:07.095 [2024-11-05 16:06:28.278807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.041 ms 00:41:07.095 [2024-11-05 16:06:28.278817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:07.095 [2024-11-05 16:06:28.304765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:07.095 [2024-11-05 16:06:28.304815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:41:07.095 [2024-11-05 16:06:28.304828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.877 ms 00:41:07.095 [2024-11-05 16:06:28.304836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:07.095 [2024-11-05 16:06:28.330205] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:07.095 [2024-11-05 16:06:28.330257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:41:07.095 [2024-11-05 16:06:28.330272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.263 ms 00:41:07.095 [2024-11-05 16:06:28.330280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:07.095 [2024-11-05 16:06:28.330355] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:41:07.095 [2024-11-05 16:06:28.330373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:41:07.095 [2024-11-05 16:06:28.330386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:41:07.095 [2024-11-05 16:06:28.330395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:41:07.095 [2024-11-05 16:06:28.330403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:41:07.095 [2024-11-05 16:06:28.330411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:41:07.095 [2024-11-05 16:06:28.330420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:41:07.095 [2024-11-05 16:06:28.330428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:41:07.095 [2024-11-05 16:06:28.330437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:41:07.095 [2024-11-05 16:06:28.330446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:41:07.095 [2024-11-05 16:06:28.330455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:41:07.095 [2024-11-05 16:06:28.330463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:41:07.095 [2024-11-05 16:06:28.330471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:41:07.095 [2024-11-05 16:06:28.330478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:41:07.095 [2024-11-05 16:06:28.330485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:41:07.095 [2024-11-05 16:06:28.330494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:41:07.095 [2024-11-05 16:06:28.330501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:41:07.095 [2024-11-05 16:06:28.330509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:41:07.095 [2024-11-05 16:06:28.330516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:41:07.095 [2024-11-05 16:06:28.330524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:41:07.095 [2024-11-05 16:06:28.330533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:41:07.095 [2024-11-05 16:06:28.330541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:41:07.095 [2024-11-05 16:06:28.330551] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:41:07.095 [2024-11-05 16:06:28.330568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:41:07.095 [2024-11-05 16:06:28.330576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:41:07.095 [2024-11-05 16:06:28.330584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:41:07.095 [2024-11-05 16:06:28.330593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:41:07.095 [2024-11-05 16:06:28.330602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:41:07.095 [2024-11-05 16:06:28.330613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:41:07.095 [2024-11-05 16:06:28.330621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:41:07.095 [2024-11-05 16:06:28.330634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:41:07.095 [2024-11-05 16:06:28.330644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:41:07.095 [2024-11-05 16:06:28.330653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:41:07.095 [2024-11-05 16:06:28.330660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:41:07.095 [2024-11-05 16:06:28.330668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:41:07.095 [2024-11-05 16:06:28.330677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:41:07.095 [2024-11-05 16:06:28.330684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:41:07.095 [2024-11-05 16:06:28.330692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:41:07.095 [2024-11-05 16:06:28.330700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:41:07.095 [2024-11-05 16:06:28.330708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:41:07.095 [2024-11-05 16:06:28.330718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:41:07.095 [2024-11-05 16:06:28.330727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:41:07.095 [2024-11-05 16:06:28.330759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:41:07.095 [2024-11-05 16:06:28.330769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:41:07.095 [2024-11-05 16:06:28.330778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:41:07.095 [2024-11-05 16:06:28.330786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:41:07.095 [2024-11-05 16:06:28.330795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:41:07.095 
[2024-11-05 16:06:28.330802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:41:07.095 [2024-11-05 16:06:28.330810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:41:07.095 [2024-11-05 16:06:28.330818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:41:07.095 [2024-11-05 16:06:28.330827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:41:07.095 [2024-11-05 16:06:28.330835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:41:07.095 [2024-11-05 16:06:28.330846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:41:07.095 [2024-11-05 16:06:28.330854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:41:07.095 [2024-11-05 16:06:28.330865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:41:07.095 [2024-11-05 16:06:28.330876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:41:07.095 [2024-11-05 16:06:28.330886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:41:07.095 [2024-11-05 16:06:28.330895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:41:07.095 [2024-11-05 16:06:28.330903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:41:07.095 [2024-11-05 16:06:28.330911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:41:07.095 [2024-11-05 16:06:28.330919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:41:07.095 [2024-11-05 16:06:28.330928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:41:07.095 [2024-11-05 16:06:28.330935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:41:07.095 [2024-11-05 16:06:28.330945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:41:07.095 [2024-11-05 16:06:28.330953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:41:07.095 [2024-11-05 16:06:28.330960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:41:07.095 [2024-11-05 16:06:28.330968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:41:07.096 [2024-11-05 16:06:28.330976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:41:07.096 [2024-11-05 16:06:28.330984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:41:07.096 [2024-11-05 16:06:28.330991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:41:07.096 [2024-11-05 16:06:28.330998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:41:07.096 [2024-11-05 16:06:28.331006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 
state: free 00:41:07.096 [2024-11-05 16:06:28.331013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:41:07.096 [2024-11-05 16:06:28.331021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:41:07.096 [2024-11-05 16:06:28.331029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:41:07.096 [2024-11-05 16:06:28.331036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:41:07.096 [2024-11-05 16:06:28.331044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:41:07.096 [2024-11-05 16:06:28.331051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:41:07.096 [2024-11-05 16:06:28.331059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:41:07.096 [2024-11-05 16:06:28.331067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:41:07.096 [2024-11-05 16:06:28.331074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:41:07.096 [2024-11-05 16:06:28.331082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:41:07.096 [2024-11-05 16:06:28.331090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:41:07.096 [2024-11-05 16:06:28.331097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:41:07.096 [2024-11-05 16:06:28.331104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:41:07.096 [2024-11-05 16:06:28.331111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:41:07.096 [2024-11-05 16:06:28.331121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:41:07.096 [2024-11-05 16:06:28.331128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:41:07.096 [2024-11-05 16:06:28.331136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:41:07.096 [2024-11-05 16:06:28.331143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:41:07.096 [2024-11-05 16:06:28.331151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:41:07.096 [2024-11-05 16:06:28.331158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:41:07.096 [2024-11-05 16:06:28.331165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:41:07.096 [2024-11-05 16:06:28.331172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:41:07.096 [2024-11-05 16:06:28.331179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:41:07.096 [2024-11-05 16:06:28.331189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:41:07.096 [2024-11-05 16:06:28.331207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 
0 / 261120 wr_cnt: 0 state: free 00:41:07.096 [2024-11-05 16:06:28.331215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:41:07.096 [2024-11-05 16:06:28.331223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:41:07.096 [2024-11-05 16:06:28.331231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:41:07.096 [2024-11-05 16:06:28.331239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:41:07.096 [2024-11-05 16:06:28.331257] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:41:07.096 [2024-11-05 16:06:28.331265] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 00f76824-5a3a-4487-9ed4-1ffd3d9e229e 00:41:07.096 [2024-11-05 16:06:28.331274] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:41:07.096 [2024-11-05 16:06:28.331282] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:41:07.096 [2024-11-05 16:06:28.331298] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:41:07.096 [2024-11-05 16:06:28.331307] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:41:07.096 [2024-11-05 16:06:28.331315] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:41:07.096 [2024-11-05 16:06:28.331323] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:41:07.096 [2024-11-05 16:06:28.331334] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:41:07.096 [2024-11-05 16:06:28.331342] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:41:07.096 [2024-11-05 16:06:28.331350] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:41:07.096 [2024-11-05 16:06:28.331358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:07.096 [2024-11-05 16:06:28.331367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:41:07.096 [2024-11-05 16:06:28.331377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.004 ms 00:41:07.096 [2024-11-05 16:06:28.331384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:07.096 [2024-11-05 16:06:28.345116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:07.096 [2024-11-05 16:06:28.345161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:41:07.096 [2024-11-05 16:06:28.345174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.693 ms 00:41:07.096 [2024-11-05 16:06:28.345183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:07.096 [2024-11-05 16:06:28.345597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:07.096 [2024-11-05 16:06:28.345609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:41:07.096 [2024-11-05 16:06:28.345618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.363 ms 00:41:07.096 [2024-11-05 16:06:28.345626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:07.096 [2024-11-05 16:06:28.385182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:07.096 [2024-11-05 16:06:28.385234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:41:07.096 [2024-11-05 16:06:28.385246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:41:07.096 [2024-11-05 16:06:28.385255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:07.096 [2024-11-05 16:06:28.385366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:07.096 [2024-11-05 16:06:28.385378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:41:07.096 [2024-11-05 16:06:28.385388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:07.096 [2024-11-05 16:06:28.385397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:07.096 [2024-11-05 16:06:28.385459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:07.096 [2024-11-05 16:06:28.385469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:41:07.096 [2024-11-05 16:06:28.385478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:07.096 [2024-11-05 16:06:28.385486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:07.096 [2024-11-05 16:06:28.385508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:07.096 [2024-11-05 16:06:28.385518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:41:07.096 [2024-11-05 16:06:28.385528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:07.096 [2024-11-05 16:06:28.385536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:07.357 [2024-11-05 16:06:28.483158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:07.357 [2024-11-05 16:06:28.483245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:41:07.357 [2024-11-05 16:06:28.483264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:07.357 [2024-11-05 16:06:28.483274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:07.357 [2024-11-05 16:06:28.559028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:07.357 [2024-11-05 16:06:28.559109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:41:07.357 [2024-11-05 16:06:28.559125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:07.357 [2024-11-05 16:06:28.559135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:07.357 [2024-11-05 16:06:28.559264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:07.357 [2024-11-05 16:06:28.559277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:41:07.357 [2024-11-05 16:06:28.559288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:07.357 [2024-11-05 16:06:28.559299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:07.357 [2024-11-05 16:06:28.559337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:07.357 [2024-11-05 16:06:28.559353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:41:07.357 [2024-11-05 16:06:28.559363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:07.357 [2024-11-05 16:06:28.559372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:07.357 [2024-11-05 16:06:28.559492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:07.357 [2024-11-05 16:06:28.559508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:41:07.357 
[2024-11-05 16:06:28.559518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:07.357 [2024-11-05 16:06:28.559527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:07.357 [2024-11-05 16:06:28.559569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:07.357 [2024-11-05 16:06:28.559581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:41:07.357 [2024-11-05 16:06:28.559593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:07.357 [2024-11-05 16:06:28.559601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:07.357 [2024-11-05 16:06:28.559658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:07.357 [2024-11-05 16:06:28.559670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:41:07.357 [2024-11-05 16:06:28.559681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:07.357 [2024-11-05 16:06:28.559692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:07.357 [2024-11-05 16:06:28.559783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:07.357 [2024-11-05 16:06:28.559801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:41:07.357 [2024-11-05 16:06:28.559812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:07.357 [2024-11-05 16:06:28.559822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:07.357 [2024-11-05 16:06:28.560014] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 391.838 ms, result 0 00:41:07.928 00:41:07.928 00:41:08.188 16:06:29 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:41:08.762 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:41:08.762 16:06:29 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:41:08.762 16:06:29 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:41:08.762 16:06:29 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:41:08.762 16:06:29 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:41:08.762 16:06:29 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:41:08.762 16:06:29 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:41:08.762 16:06:29 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 74084 00:41:08.762 16:06:29 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 74084 ']' 00:41:08.762 Process with pid 74084 is not found 00:41:08.762 16:06:29 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 74084 00:41:08.762 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (74084) - No such process 00:41:08.762 16:06:29 ftl.ftl_trim -- common/autotest_common.sh@979 -- # echo 'Process with pid 74084 is not found' 00:41:08.762 00:41:08.762 real 1m17.841s 00:41:08.762 user 1m33.815s 00:41:08.762 sys 0m14.765s 00:41:08.762 16:06:29 ftl.ftl_trim -- common/autotest_common.sh@1128 -- # xtrace_disable 00:41:08.762 16:06:29 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:41:08.762 ************************************ 00:41:08.762 END TEST ftl_trim 00:41:08.762 ************************************ 00:41:08.762 16:06:30 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore 
/home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:41:08.762 16:06:30 ftl -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:41:08.762 16:06:30 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:41:08.762 16:06:30 ftl -- common/autotest_common.sh@10 -- # set +x 00:41:08.762 ************************************ 00:41:08.762 START TEST ftl_restore 00:41:08.762 ************************************ 00:41:08.762 16:06:30 ftl.ftl_restore -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:41:08.762 * Looking for test storage... 00:41:08.762 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:41:08.762 16:06:30 ftl.ftl_restore -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:41:08.762 16:06:30 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # lcov --version 00:41:08.762 16:06:30 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:41:09.023 16:06:30 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:41:09.023 16:06:30 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:09.023 16:06:30 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:09.023 16:06:30 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:09.023 16:06:30 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:41:09.023 16:06:30 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:41:09.023 16:06:30 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:41:09.023 16:06:30 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:41:09.023 16:06:30 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:41:09.023 16:06:30 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:41:09.023 16:06:30 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:41:09.023 16:06:30 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:09.023 16:06:30 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:41:09.023 16:06:30 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:41:09.023 16:06:30 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:09.023 16:06:30 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:09.023 16:06:30 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:41:09.023 16:06:30 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:41:09.023 16:06:30 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:09.023 16:06:30 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:41:09.023 16:06:30 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:41:09.023 16:06:30 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:41:09.023 16:06:30 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:41:09.023 16:06:30 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:09.023 16:06:30 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:41:09.023 16:06:30 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:41:09.023 16:06:30 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:09.023 16:06:30 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:09.023 16:06:30 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:41:09.023 16:06:30 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:09.023 16:06:30 ftl.ftl_restore -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:41:09.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:09.023 --rc genhtml_branch_coverage=1 00:41:09.023 --rc genhtml_function_coverage=1 00:41:09.023 --rc genhtml_legend=1 00:41:09.023 --rc geninfo_all_blocks=1 00:41:09.023 --rc geninfo_unexecuted_blocks=1 00:41:09.023 00:41:09.023 ' 00:41:09.023 16:06:30 ftl.ftl_restore -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:41:09.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:09.023 --rc genhtml_branch_coverage=1 00:41:09.023 --rc genhtml_function_coverage=1 00:41:09.023 --rc genhtml_legend=1 00:41:09.023 --rc geninfo_all_blocks=1 00:41:09.023 --rc geninfo_unexecuted_blocks=1 00:41:09.023 00:41:09.023 ' 00:41:09.023 16:06:30 ftl.ftl_restore -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:41:09.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:09.023 --rc genhtml_branch_coverage=1 00:41:09.023 --rc genhtml_function_coverage=1 00:41:09.023 --rc genhtml_legend=1 00:41:09.023 --rc geninfo_all_blocks=1 00:41:09.023 --rc geninfo_unexecuted_blocks=1 00:41:09.023 00:41:09.023 ' 00:41:09.023 16:06:30 ftl.ftl_restore -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:41:09.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:09.023 --rc genhtml_branch_coverage=1 00:41:09.023 --rc genhtml_function_coverage=1 00:41:09.023 --rc genhtml_legend=1 00:41:09.023 --rc geninfo_all_blocks=1 00:41:09.023 --rc geninfo_unexecuted_blocks=1 00:41:09.023 00:41:09.023 ' 00:41:09.023 16:06:30 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:41:09.023 16:06:30 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:41:09.023 16:06:30 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:41:09.023 16:06:30 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:41:09.023 16:06:30 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:41:09.023 16:06:30 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:41:09.023 16:06:30 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:41:09.023 16:06:30 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:41:09.023 16:06:30 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:41:09.023 16:06:30 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:41:09.023 16:06:30 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:41:09.023 16:06:30 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:41:09.023 16:06:30 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:41:09.023 16:06:30 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:41:09.023 16:06:30 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:41:09.023 16:06:30 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:41:09.023 16:06:30 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:41:09.023 16:06:30 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:41:09.023 16:06:30 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:41:09.023 16:06:30 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:41:09.023 16:06:30 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:41:09.023 16:06:30 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:41:09.023 16:06:30 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:41:09.023 16:06:30 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:41:09.023 16:06:30 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:41:09.023 16:06:30 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:41:09.023 16:06:30 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:41:09.023 16:06:30 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:09.023 16:06:30 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:09.023 16:06:30 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:41:09.023 16:06:30 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:41:09.023 16:06:30 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.KaynSA0JLI 00:41:09.023 16:06:30 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:41:09.023 16:06:30 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:41:09.023 16:06:30 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:41:09.023 16:06:30 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:41:09.023 16:06:30 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:41:09.023 16:06:30 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:41:09.024 16:06:30 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:41:09.024 16:06:30 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:41:09.024 
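The xtrace records above show ftl/common.sh computing its paths relative to restore.sh before the restore test proper starts. A minimal sketch of that resolution logic, reconstructed from the dirname/readlink records in the trace rather than copied from the script itself, assuming the workspace layout seen above:

    # resolve the test directory from the calling script, then the repo root
    # two levels up, as the dirname/readlink trace records show
    testdir=$(readlink -f "$(dirname "$0")")   # -> /home/vagrant/spdk_repo/spdk/test/ftl
    rootdir=$(readlink -f "$testdir/../..")    # -> /home/vagrant/spdk_repo/spdk
    rpc_py=$rootdir/scripts/rpc.py             # JSON-RPC client used for the bdev setup below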
16:06:30 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=74402 00:41:09.024 16:06:30 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 74402 00:41:09.024 16:06:30 ftl.ftl_restore -- common/autotest_common.sh@833 -- # '[' -z 74402 ']' 00:41:09.024 16:06:30 ftl.ftl_restore -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:09.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:09.024 16:06:30 ftl.ftl_restore -- common/autotest_common.sh@838 -- # local max_retries=100 00:41:09.024 16:06:30 ftl.ftl_restore -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:09.024 16:06:30 ftl.ftl_restore -- common/autotest_common.sh@842 -- # xtrace_disable 00:41:09.024 16:06:30 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:41:09.024 16:06:30 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:41:09.024 [2024-11-05 16:06:30.294195] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 00:41:09.024 [2024-11-05 16:06:30.294379] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74402 ] 00:41:09.285 [2024-11-05 16:06:30.458322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:09.285 [2024-11-05 16:06:30.588120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:10.228 16:06:31 ftl.ftl_restore -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:41:10.228 16:06:31 ftl.ftl_restore -- common/autotest_common.sh@866 -- # return 0 00:41:10.228 16:06:31 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:41:10.228 16:06:31 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:41:10.228 16:06:31 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:41:10.228 16:06:31 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:41:10.228 16:06:31 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:41:10.228 16:06:31 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:41:10.228 16:06:31 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:41:10.228 16:06:31 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:41:10.228 16:06:31 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:41:10.228 16:06:31 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:41:10.228 16:06:31 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:41:10.228 16:06:31 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:41:10.228 16:06:31 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:41:10.228 16:06:31 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:41:10.488 16:06:31 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:41:10.488 { 00:41:10.488 "name": "nvme0n1", 00:41:10.488 "aliases": [ 00:41:10.488 "d92676a9-90ca-4dcd-928a-26aac70c35fb" 00:41:10.488 ], 00:41:10.488 "product_name": "NVMe disk", 00:41:10.488 "block_size": 4096, 00:41:10.488 "num_blocks": 1310720, 00:41:10.488 "uuid": 
"d92676a9-90ca-4dcd-928a-26aac70c35fb", 00:41:10.488 "numa_id": -1, 00:41:10.488 "assigned_rate_limits": { 00:41:10.488 "rw_ios_per_sec": 0, 00:41:10.488 "rw_mbytes_per_sec": 0, 00:41:10.488 "r_mbytes_per_sec": 0, 00:41:10.488 "w_mbytes_per_sec": 0 00:41:10.488 }, 00:41:10.488 "claimed": true, 00:41:10.488 "claim_type": "read_many_write_one", 00:41:10.488 "zoned": false, 00:41:10.488 "supported_io_types": { 00:41:10.488 "read": true, 00:41:10.488 "write": true, 00:41:10.488 "unmap": true, 00:41:10.489 "flush": true, 00:41:10.489 "reset": true, 00:41:10.489 "nvme_admin": true, 00:41:10.489 "nvme_io": true, 00:41:10.489 "nvme_io_md": false, 00:41:10.489 "write_zeroes": true, 00:41:10.489 "zcopy": false, 00:41:10.489 "get_zone_info": false, 00:41:10.489 "zone_management": false, 00:41:10.489 "zone_append": false, 00:41:10.489 "compare": true, 00:41:10.489 "compare_and_write": false, 00:41:10.489 "abort": true, 00:41:10.489 "seek_hole": false, 00:41:10.489 "seek_data": false, 00:41:10.489 "copy": true, 00:41:10.489 "nvme_iov_md": false 00:41:10.489 }, 00:41:10.489 "driver_specific": { 00:41:10.489 "nvme": [ 00:41:10.489 { 00:41:10.489 "pci_address": "0000:00:11.0", 00:41:10.489 "trid": { 00:41:10.489 "trtype": "PCIe", 00:41:10.489 "traddr": "0000:00:11.0" 00:41:10.489 }, 00:41:10.489 "ctrlr_data": { 00:41:10.489 "cntlid": 0, 00:41:10.489 "vendor_id": "0x1b36", 00:41:10.489 "model_number": "QEMU NVMe Ctrl", 00:41:10.489 "serial_number": "12341", 00:41:10.489 "firmware_revision": "8.0.0", 00:41:10.489 "subnqn": "nqn.2019-08.org.qemu:12341", 00:41:10.489 "oacs": { 00:41:10.489 "security": 0, 00:41:10.489 "format": 1, 00:41:10.489 "firmware": 0, 00:41:10.489 "ns_manage": 1 00:41:10.489 }, 00:41:10.489 "multi_ctrlr": false, 00:41:10.489 "ana_reporting": false 00:41:10.489 }, 00:41:10.489 "vs": { 00:41:10.489 "nvme_version": "1.4" 00:41:10.489 }, 00:41:10.489 "ns_data": { 00:41:10.489 "id": 1, 00:41:10.489 "can_share": false 00:41:10.489 } 00:41:10.489 } 00:41:10.489 ], 00:41:10.489 "mp_policy": "active_passive" 00:41:10.489 } 00:41:10.489 } 00:41:10.489 ]' 00:41:10.489 16:06:31 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:41:10.489 16:06:31 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:41:10.489 16:06:31 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:41:10.750 16:06:31 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # nb=1310720 00:41:10.750 16:06:31 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:41:10.750 16:06:31 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 5120 00:41:10.750 16:06:31 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:41:10.750 16:06:31 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:41:10.750 16:06:31 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:41:10.750 16:06:31 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:41:10.750 16:06:31 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:41:10.750 16:06:32 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=5de44d8f-b151-473f-9339-a2e3219ee37e 00:41:10.750 16:06:32 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:41:10.750 16:06:32 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5de44d8f-b151-473f-9339-a2e3219ee37e 00:41:11.011 16:06:32 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:41:11.272 16:06:32 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=39f4887c-03a0-4829-aa32-cc5dcac30a86 00:41:11.272 16:06:32 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 39f4887c-03a0-4829-aa32-cc5dcac30a86 00:41:11.533 16:06:32 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=112c31d4-df86-424f-8b40-0a324c2d1db7 00:41:11.533 16:06:32 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:41:11.533 16:06:32 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 112c31d4-df86-424f-8b40-0a324c2d1db7 00:41:11.533 16:06:32 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:41:11.533 16:06:32 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:41:11.533 16:06:32 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=112c31d4-df86-424f-8b40-0a324c2d1db7 00:41:11.533 16:06:32 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:41:11.534 16:06:32 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 112c31d4-df86-424f-8b40-0a324c2d1db7 00:41:11.534 16:06:32 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=112c31d4-df86-424f-8b40-0a324c2d1db7 00:41:11.534 16:06:32 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:41:11.534 16:06:32 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:41:11.534 16:06:32 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:41:11.534 16:06:32 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 112c31d4-df86-424f-8b40-0a324c2d1db7 00:41:11.795 16:06:32 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:41:11.795 { 00:41:11.795 "name": "112c31d4-df86-424f-8b40-0a324c2d1db7", 00:41:11.795 "aliases": [ 00:41:11.795 "lvs/nvme0n1p0" 00:41:11.795 ], 00:41:11.795 "product_name": "Logical Volume", 00:41:11.795 "block_size": 4096, 00:41:11.795 "num_blocks": 26476544, 00:41:11.795 "uuid": "112c31d4-df86-424f-8b40-0a324c2d1db7", 00:41:11.795 "assigned_rate_limits": { 00:41:11.795 "rw_ios_per_sec": 0, 00:41:11.795 "rw_mbytes_per_sec": 0, 00:41:11.795 "r_mbytes_per_sec": 0, 00:41:11.795 "w_mbytes_per_sec": 0 00:41:11.795 }, 00:41:11.795 "claimed": false, 00:41:11.795 "zoned": false, 00:41:11.795 "supported_io_types": { 00:41:11.795 "read": true, 00:41:11.795 "write": true, 00:41:11.795 "unmap": true, 00:41:11.795 "flush": false, 00:41:11.795 "reset": true, 00:41:11.795 "nvme_admin": false, 00:41:11.795 "nvme_io": false, 00:41:11.795 "nvme_io_md": false, 00:41:11.795 "write_zeroes": true, 00:41:11.795 "zcopy": false, 00:41:11.795 "get_zone_info": false, 00:41:11.795 "zone_management": false, 00:41:11.795 "zone_append": false, 00:41:11.795 "compare": false, 00:41:11.795 "compare_and_write": false, 00:41:11.795 "abort": false, 00:41:11.795 "seek_hole": true, 00:41:11.795 "seek_data": true, 00:41:11.795 "copy": false, 00:41:11.795 "nvme_iov_md": false 00:41:11.795 }, 00:41:11.795 "driver_specific": { 00:41:11.795 "lvol": { 00:41:11.795 "lvol_store_uuid": "39f4887c-03a0-4829-aa32-cc5dcac30a86", 00:41:11.795 "base_bdev": "nvme0n1", 00:41:11.795 "thin_provision": true, 00:41:11.795 "num_allocated_clusters": 0, 00:41:11.795 "snapshot": false, 00:41:11.795 "clone": false, 00:41:11.795 "esnap_clone": false 00:41:11.795 } 00:41:11.795 } 00:41:11.795 } 00:41:11.795 ]' 00:41:11.795 16:06:32 ftl.ftl_restore -- 
common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:41:11.795 16:06:33 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:41:11.795 16:06:33 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:41:11.795 16:06:33 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # nb=26476544 00:41:11.795 16:06:33 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:41:11.795 16:06:33 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 103424 00:41:11.795 16:06:33 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:41:11.795 16:06:33 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:41:11.795 16:06:33 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:41:12.054 16:06:33 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:41:12.054 16:06:33 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:41:12.054 16:06:33 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 112c31d4-df86-424f-8b40-0a324c2d1db7 00:41:12.054 16:06:33 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=112c31d4-df86-424f-8b40-0a324c2d1db7 00:41:12.054 16:06:33 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:41:12.054 16:06:33 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:41:12.054 16:06:33 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:41:12.054 16:06:33 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 112c31d4-df86-424f-8b40-0a324c2d1db7 00:41:12.314 16:06:33 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:41:12.314 { 00:41:12.314 "name": "112c31d4-df86-424f-8b40-0a324c2d1db7", 00:41:12.314 "aliases": [ 00:41:12.314 "lvs/nvme0n1p0" 00:41:12.314 ], 00:41:12.314 "product_name": "Logical Volume", 00:41:12.314 "block_size": 4096, 00:41:12.314 "num_blocks": 26476544, 00:41:12.314 "uuid": "112c31d4-df86-424f-8b40-0a324c2d1db7", 00:41:12.314 "assigned_rate_limits": { 00:41:12.314 "rw_ios_per_sec": 0, 00:41:12.314 "rw_mbytes_per_sec": 0, 00:41:12.314 "r_mbytes_per_sec": 0, 00:41:12.314 "w_mbytes_per_sec": 0 00:41:12.314 }, 00:41:12.314 "claimed": false, 00:41:12.314 "zoned": false, 00:41:12.314 "supported_io_types": { 00:41:12.314 "read": true, 00:41:12.314 "write": true, 00:41:12.314 "unmap": true, 00:41:12.314 "flush": false, 00:41:12.314 "reset": true, 00:41:12.314 "nvme_admin": false, 00:41:12.314 "nvme_io": false, 00:41:12.314 "nvme_io_md": false, 00:41:12.314 "write_zeroes": true, 00:41:12.314 "zcopy": false, 00:41:12.314 "get_zone_info": false, 00:41:12.314 "zone_management": false, 00:41:12.314 "zone_append": false, 00:41:12.314 "compare": false, 00:41:12.314 "compare_and_write": false, 00:41:12.315 "abort": false, 00:41:12.315 "seek_hole": true, 00:41:12.315 "seek_data": true, 00:41:12.315 "copy": false, 00:41:12.315 "nvme_iov_md": false 00:41:12.315 }, 00:41:12.315 "driver_specific": { 00:41:12.315 "lvol": { 00:41:12.315 "lvol_store_uuid": "39f4887c-03a0-4829-aa32-cc5dcac30a86", 00:41:12.315 "base_bdev": "nvme0n1", 00:41:12.315 "thin_provision": true, 00:41:12.315 "num_allocated_clusters": 0, 00:41:12.315 "snapshot": false, 00:41:12.315 "clone": false, 00:41:12.315 "esnap_clone": false 00:41:12.315 } 00:41:12.315 } 00:41:12.315 } 00:41:12.315 ]' 00:41:12.315 16:06:33 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 
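The create_nv_cache_bdev step being traced here (ftl/common.sh@35-50) attaches the second QEMU NVMe controller as nvc0 and carves a cache partition sized from the base lvol. A condensed sketch of that flow, using only commands and values visible in this trace (rpc.py path abbreviated):

  # get_bdev_size multiplies block_size by num_blocks and converts to MiB,
  # which is where the bdev_size= values in this trace come from:
  echo $(( 4096 * 26476544 / 1024 / 1024 ))   # -> 103424 (MiB, the thin lvol)
  echo $(( 4096 * 1310720 / 1024 / 1024 ))    # -> 5120   (MiB, nvme0n1 earlier)
  # attach the cache controller, then split off a 5171 MiB partition
  # (apparently ~1/20 of the 103424 MiB base bdev) as nvc0n1p0:
  rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
  rpc.py bdev_split_create nvc0n1 -s 5171 1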
00:41:12.315 16:06:33 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:41:12.315 16:06:33 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:41:12.315 16:06:33 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # nb=26476544 00:41:12.315 16:06:33 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:41:12.315 16:06:33 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 103424 00:41:12.315 16:06:33 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:41:12.315 16:06:33 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:41:12.573 16:06:33 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:41:12.573 16:06:33 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 112c31d4-df86-424f-8b40-0a324c2d1db7 00:41:12.573 16:06:33 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=112c31d4-df86-424f-8b40-0a324c2d1db7 00:41:12.573 16:06:33 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:41:12.573 16:06:33 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:41:12.573 16:06:33 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:41:12.573 16:06:33 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 112c31d4-df86-424f-8b40-0a324c2d1db7 00:41:12.831 16:06:34 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:41:12.831 { 00:41:12.831 "name": "112c31d4-df86-424f-8b40-0a324c2d1db7", 00:41:12.831 "aliases": [ 00:41:12.831 "lvs/nvme0n1p0" 00:41:12.831 ], 00:41:12.831 "product_name": "Logical Volume", 00:41:12.831 "block_size": 4096, 00:41:12.831 "num_blocks": 26476544, 00:41:12.831 "uuid": "112c31d4-df86-424f-8b40-0a324c2d1db7", 00:41:12.831 "assigned_rate_limits": { 00:41:12.831 "rw_ios_per_sec": 0, 00:41:12.831 "rw_mbytes_per_sec": 0, 00:41:12.831 "r_mbytes_per_sec": 0, 00:41:12.831 "w_mbytes_per_sec": 0 00:41:12.831 }, 00:41:12.831 "claimed": false, 00:41:12.831 "zoned": false, 00:41:12.831 "supported_io_types": { 00:41:12.831 "read": true, 00:41:12.831 "write": true, 00:41:12.831 "unmap": true, 00:41:12.831 "flush": false, 00:41:12.831 "reset": true, 00:41:12.831 "nvme_admin": false, 00:41:12.831 "nvme_io": false, 00:41:12.831 "nvme_io_md": false, 00:41:12.831 "write_zeroes": true, 00:41:12.831 "zcopy": false, 00:41:12.831 "get_zone_info": false, 00:41:12.831 "zone_management": false, 00:41:12.831 "zone_append": false, 00:41:12.831 "compare": false, 00:41:12.831 "compare_and_write": false, 00:41:12.831 "abort": false, 00:41:12.831 "seek_hole": true, 00:41:12.831 "seek_data": true, 00:41:12.831 "copy": false, 00:41:12.831 "nvme_iov_md": false 00:41:12.831 }, 00:41:12.831 "driver_specific": { 00:41:12.831 "lvol": { 00:41:12.831 "lvol_store_uuid": "39f4887c-03a0-4829-aa32-cc5dcac30a86", 00:41:12.831 "base_bdev": "nvme0n1", 00:41:12.831 "thin_provision": true, 00:41:12.831 "num_allocated_clusters": 0, 00:41:12.831 "snapshot": false, 00:41:12.831 "clone": false, 00:41:12.831 "esnap_clone": false 00:41:12.831 } 00:41:12.831 } 00:41:12.831 } 00:41:12.831 ]' 00:41:12.831 16:06:34 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:41:12.831 16:06:34 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:41:12.831 16:06:34 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:41:12.831 16:06:34 ftl.ftl_restore -- 
common/autotest_common.sh@1386 -- # nb=26476544 00:41:12.831 16:06:34 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:41:12.831 16:06:34 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 103424 00:41:12.831 16:06:34 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:41:12.831 16:06:34 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 112c31d4-df86-424f-8b40-0a324c2d1db7 --l2p_dram_limit 10' 00:41:12.831 16:06:34 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:41:12.831 16:06:34 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:41:12.831 16:06:34 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:41:12.831 16:06:34 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:41:12.831 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:41:12.831 16:06:34 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 112c31d4-df86-424f-8b40-0a324c2d1db7 --l2p_dram_limit 10 -c nvc0n1p0 00:41:13.092 [2024-11-05 16:06:34.250884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:13.092 [2024-11-05 16:06:34.250923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:41:13.092 [2024-11-05 16:06:34.250936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:41:13.092 [2024-11-05 16:06:34.250942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:13.092 [2024-11-05 16:06:34.250990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:13.092 [2024-11-05 16:06:34.250998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:41:13.092 [2024-11-05 16:06:34.251005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:41:13.092 [2024-11-05 16:06:34.251011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:13.092 [2024-11-05 16:06:34.251030] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:41:13.093 [2024-11-05 16:06:34.251630] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:41:13.093 [2024-11-05 16:06:34.251653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:13.093 [2024-11-05 16:06:34.251659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:41:13.093 [2024-11-05 16:06:34.251667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.627 ms 00:41:13.093 [2024-11-05 16:06:34.251672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:13.093 [2024-11-05 16:06:34.251725] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 49f3c030-3db9-4227-a747-0995db4fc140 00:41:13.093 [2024-11-05 16:06:34.252658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:13.093 [2024-11-05 16:06:34.252683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:41:13.093 [2024-11-05 16:06:34.252691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:41:13.093 [2024-11-05 16:06:34.252699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:13.093 [2024-11-05 16:06:34.257374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:13.093 [2024-11-05 
16:06:34.257402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:41:13.093 [2024-11-05 16:06:34.257411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.642 ms 00:41:13.093 [2024-11-05 16:06:34.257418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:13.093 [2024-11-05 16:06:34.257485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:13.093 [2024-11-05 16:06:34.257494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:41:13.093 [2024-11-05 16:06:34.257501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:41:13.093 [2024-11-05 16:06:34.257510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:13.093 [2024-11-05 16:06:34.257544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:13.093 [2024-11-05 16:06:34.257553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:41:13.093 [2024-11-05 16:06:34.257560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:41:13.093 [2024-11-05 16:06:34.257569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:13.093 [2024-11-05 16:06:34.257585] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:41:13.093 [2024-11-05 16:06:34.260443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:13.093 [2024-11-05 16:06:34.260470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:41:13.093 [2024-11-05 16:06:34.260480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.861 ms 00:41:13.093 [2024-11-05 16:06:34.260486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:13.093 [2024-11-05 16:06:34.260512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:13.093 [2024-11-05 16:06:34.260518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:41:13.093 [2024-11-05 16:06:34.260525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:41:13.093 [2024-11-05 16:06:34.260531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:13.093 [2024-11-05 16:06:34.260550] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:41:13.093 [2024-11-05 16:06:34.260653] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:41:13.093 [2024-11-05 16:06:34.260665] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:41:13.093 [2024-11-05 16:06:34.260673] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:41:13.093 [2024-11-05 16:06:34.260682] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:41:13.093 [2024-11-05 16:06:34.260689] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:41:13.093 [2024-11-05 16:06:34.260696] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:41:13.093 [2024-11-05 16:06:34.260702] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:41:13.093 [2024-11-05 16:06:34.260710] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:41:13.093 [2024-11-05 16:06:34.260716] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:41:13.093 [2024-11-05 16:06:34.260723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:13.093 [2024-11-05 16:06:34.260729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:41:13.093 [2024-11-05 16:06:34.260745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.174 ms 00:41:13.093 [2024-11-05 16:06:34.260757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:13.093 [2024-11-05 16:06:34.260823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:13.093 [2024-11-05 16:06:34.260830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:41:13.093 [2024-11-05 16:06:34.260837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:41:13.093 [2024-11-05 16:06:34.260842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:13.093 [2024-11-05 16:06:34.260920] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:41:13.093 [2024-11-05 16:06:34.260934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:41:13.093 [2024-11-05 16:06:34.260941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:41:13.093 [2024-11-05 16:06:34.260947] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:13.093 [2024-11-05 16:06:34.260955] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:41:13.093 [2024-11-05 16:06:34.260960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:41:13.093 [2024-11-05 16:06:34.260967] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:41:13.093 [2024-11-05 16:06:34.260973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:41:13.093 [2024-11-05 16:06:34.260979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:41:13.093 [2024-11-05 16:06:34.260984] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:41:13.093 [2024-11-05 16:06:34.260991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:41:13.093 [2024-11-05 16:06:34.260996] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:41:13.093 [2024-11-05 16:06:34.261002] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:41:13.093 [2024-11-05 16:06:34.261007] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:41:13.093 [2024-11-05 16:06:34.261013] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:41:13.093 [2024-11-05 16:06:34.261020] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:13.093 [2024-11-05 16:06:34.261028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:41:13.093 [2024-11-05 16:06:34.261033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:41:13.093 [2024-11-05 16:06:34.261041] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:13.093 [2024-11-05 16:06:34.261046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:41:13.093 [2024-11-05 16:06:34.261053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:41:13.093 [2024-11-05 16:06:34.261058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:41:13.093 [2024-11-05 16:06:34.261064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:41:13.093 
[2024-11-05 16:06:34.261069] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:41:13.093 [2024-11-05 16:06:34.261076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:41:13.093 [2024-11-05 16:06:34.261081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:41:13.093 [2024-11-05 16:06:34.261087] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:41:13.093 [2024-11-05 16:06:34.261092] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:41:13.093 [2024-11-05 16:06:34.261098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:41:13.093 [2024-11-05 16:06:34.261103] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:41:13.093 [2024-11-05 16:06:34.261109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:41:13.093 [2024-11-05 16:06:34.261114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:41:13.093 [2024-11-05 16:06:34.261122] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:41:13.093 [2024-11-05 16:06:34.261126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:41:13.093 [2024-11-05 16:06:34.261133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:41:13.093 [2024-11-05 16:06:34.261138] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:41:13.093 [2024-11-05 16:06:34.261143] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:41:13.093 [2024-11-05 16:06:34.261148] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:41:13.093 [2024-11-05 16:06:34.261154] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:41:13.093 [2024-11-05 16:06:34.261160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:13.093 [2024-11-05 16:06:34.261166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:41:13.093 [2024-11-05 16:06:34.261171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:41:13.093 [2024-11-05 16:06:34.261177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:13.093 [2024-11-05 16:06:34.261182] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:41:13.093 [2024-11-05 16:06:34.261189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:41:13.093 [2024-11-05 16:06:34.261194] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:41:13.093 [2024-11-05 16:06:34.261202] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:13.093 [2024-11-05 16:06:34.261209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:41:13.093 [2024-11-05 16:06:34.261217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:41:13.093 [2024-11-05 16:06:34.261222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:41:13.093 [2024-11-05 16:06:34.261229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:41:13.093 [2024-11-05 16:06:34.261233] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:41:13.093 [2024-11-05 16:06:34.261240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:41:13.093 [2024-11-05 16:06:34.261247] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:41:13.093 [2024-11-05 
16:06:34.261256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:41:13.094 [2024-11-05 16:06:34.261264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:41:13.094 [2024-11-05 16:06:34.261270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:41:13.094 [2024-11-05 16:06:34.261276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:41:13.094 [2024-11-05 16:06:34.261283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:41:13.094 [2024-11-05 16:06:34.261288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:41:13.094 [2024-11-05 16:06:34.261294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:41:13.094 [2024-11-05 16:06:34.261300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:41:13.094 [2024-11-05 16:06:34.261307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:41:13.094 [2024-11-05 16:06:34.261312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:41:13.094 [2024-11-05 16:06:34.261319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:41:13.094 [2024-11-05 16:06:34.261325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:41:13.094 [2024-11-05 16:06:34.261332] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:41:13.094 [2024-11-05 16:06:34.261337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:41:13.094 [2024-11-05 16:06:34.261345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:41:13.094 [2024-11-05 16:06:34.261350] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:41:13.094 [2024-11-05 16:06:34.261357] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:41:13.094 [2024-11-05 16:06:34.261363] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:41:13.094 [2024-11-05 16:06:34.261370] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:41:13.094 [2024-11-05 16:06:34.261376] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:41:13.094 [2024-11-05 16:06:34.261383] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:41:13.094 [2024-11-05 16:06:34.261388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:13.094 [2024-11-05 16:06:34.261395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:41:13.094 [2024-11-05 16:06:34.261400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.522 ms 00:41:13.094 [2024-11-05 16:06:34.261407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:13.094 [2024-11-05 16:06:34.261436] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:41:13.094 [2024-11-05 16:06:34.261446] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:41:17.299 [2024-11-05 16:06:37.940126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:17.299 [2024-11-05 16:06:37.940219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:41:17.299 [2024-11-05 16:06:37.940237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3678.674 ms 00:41:17.299 [2024-11-05 16:06:37.940249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:17.299 [2024-11-05 16:06:37.971652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:17.299 [2024-11-05 16:06:37.971721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:41:17.299 [2024-11-05 16:06:37.971752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.156 ms 00:41:17.299 [2024-11-05 16:06:37.971764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:17.299 [2024-11-05 16:06:37.971928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:17.299 [2024-11-05 16:06:37.971943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:41:17.299 [2024-11-05 16:06:37.971953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:41:17.299 [2024-11-05 16:06:37.971966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:17.299 [2024-11-05 16:06:38.007031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:17.299 [2024-11-05 16:06:38.007085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:41:17.299 [2024-11-05 16:06:38.007097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.022 ms 00:41:17.299 [2024-11-05 16:06:38.007108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:17.299 [2024-11-05 16:06:38.007143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:17.299 [2024-11-05 16:06:38.007162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:41:17.299 [2024-11-05 16:06:38.007171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:41:17.299 [2024-11-05 16:06:38.007182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:17.299 [2024-11-05 16:06:38.007719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:17.299 [2024-11-05 16:06:38.007774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:41:17.299 [2024-11-05 16:06:38.007785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.486 ms 00:41:17.299 [2024-11-05 16:06:38.007797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:17.299 
[2024-11-05 16:06:38.007911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:17.299 [2024-11-05 16:06:38.007924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:41:17.299 [2024-11-05 16:06:38.007936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:41:17.299 [2024-11-05 16:06:38.007950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:17.299 [2024-11-05 16:06:38.025776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:17.299 [2024-11-05 16:06:38.025828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:41:17.299 [2024-11-05 16:06:38.025839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.805 ms 00:41:17.299 [2024-11-05 16:06:38.025850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:17.299 [2024-11-05 16:06:38.039194] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:41:17.299 [2024-11-05 16:06:38.042939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:17.299 [2024-11-05 16:06:38.042981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:41:17.299 [2024-11-05 16:06:38.042994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.001 ms 00:41:17.299 [2024-11-05 16:06:38.043003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:17.299 [2024-11-05 16:06:38.156201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:17.299 [2024-11-05 16:06:38.156267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:41:17.300 [2024-11-05 16:06:38.156287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 113.163 ms 00:41:17.300 [2024-11-05 16:06:38.156297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:17.300 [2024-11-05 16:06:38.156506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:17.300 [2024-11-05 16:06:38.156524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:41:17.300 [2024-11-05 16:06:38.156539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:41:17.300 [2024-11-05 16:06:38.156549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:17.300 [2024-11-05 16:06:38.182920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:17.300 [2024-11-05 16:06:38.182973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:41:17.300 [2024-11-05 16:06:38.182989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.315 ms 00:41:17.300 [2024-11-05 16:06:38.182998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:17.300 [2024-11-05 16:06:38.207838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:17.300 [2024-11-05 16:06:38.207889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:41:17.300 [2024-11-05 16:06:38.207904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.786 ms 00:41:17.300 [2024-11-05 16:06:38.207911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:17.300 [2024-11-05 16:06:38.208520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:17.300 [2024-11-05 16:06:38.208547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:41:17.300 
[2024-11-05 16:06:38.208559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.561 ms 00:41:17.300 [2024-11-05 16:06:38.208567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:17.300 [2024-11-05 16:06:38.293788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:17.300 [2024-11-05 16:06:38.293844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:41:17.300 [2024-11-05 16:06:38.293863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.172 ms 00:41:17.300 [2024-11-05 16:06:38.293873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:17.300 [2024-11-05 16:06:38.321112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:17.300 [2024-11-05 16:06:38.321167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:41:17.300 [2024-11-05 16:06:38.321183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.144 ms 00:41:17.300 [2024-11-05 16:06:38.321191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:17.300 [2024-11-05 16:06:38.347178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:17.300 [2024-11-05 16:06:38.347227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:41:17.300 [2024-11-05 16:06:38.347243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.931 ms 00:41:17.300 [2024-11-05 16:06:38.347250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:17.300 [2024-11-05 16:06:38.373903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:17.300 [2024-11-05 16:06:38.373953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:41:17.300 [2024-11-05 16:06:38.373968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.599 ms 00:41:17.300 [2024-11-05 16:06:38.373975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:17.300 [2024-11-05 16:06:38.374030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:17.300 [2024-11-05 16:06:38.374040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:41:17.300 [2024-11-05 16:06:38.374054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:41:17.300 [2024-11-05 16:06:38.374062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:17.300 [2024-11-05 16:06:38.374155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:17.300 [2024-11-05 16:06:38.374167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:41:17.300 [2024-11-05 16:06:38.374181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:41:17.300 [2024-11-05 16:06:38.374189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:17.300 [2024-11-05 16:06:38.375672] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4124.272 ms, result 0 00:41:17.300 { 00:41:17.300 "name": "ftl0", 00:41:17.300 "uuid": "49f3c030-3db9-4227-a747-0995db4fc140" 00:41:17.300 } 00:41:17.300 16:06:38 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:41:17.300 16:06:38 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:41:17.300 16:06:38 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:41:17.300 16:06:38 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:41:17.562 [2024-11-05 16:06:38.830796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:17.562 [2024-11-05 16:06:38.830856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:41:17.562 [2024-11-05 16:06:38.830870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:41:17.562 [2024-11-05 16:06:38.830887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:17.562 [2024-11-05 16:06:38.830911] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:41:17.562 [2024-11-05 16:06:38.833854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:17.562 [2024-11-05 16:06:38.833895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:41:17.562 [2024-11-05 16:06:38.833909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.921 ms 00:41:17.562 [2024-11-05 16:06:38.833919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:17.562 [2024-11-05 16:06:38.834188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:17.562 [2024-11-05 16:06:38.834200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:41:17.562 [2024-11-05 16:06:38.834215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.241 ms 00:41:17.562 [2024-11-05 16:06:38.834223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:17.562 [2024-11-05 16:06:38.837491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:17.562 [2024-11-05 16:06:38.837516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:41:17.562 [2024-11-05 16:06:38.837529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.249 ms 00:41:17.562 [2024-11-05 16:06:38.837539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:17.562 [2024-11-05 16:06:38.843726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:17.562 [2024-11-05 16:06:38.843776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:41:17.562 [2024-11-05 16:06:38.843793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.164 ms 00:41:17.562 [2024-11-05 16:06:38.843801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:17.562 [2024-11-05 16:06:38.869779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:17.562 [2024-11-05 16:06:38.869826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:41:17.562 [2024-11-05 16:06:38.869841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.898 ms 00:41:17.562 [2024-11-05 16:06:38.869849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:17.562 [2024-11-05 16:06:38.887517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:17.562 [2024-11-05 16:06:38.887581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:41:17.562 [2024-11-05 16:06:38.887597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.611 ms 00:41:17.562 [2024-11-05 16:06:38.887605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:17.562 [2024-11-05 16:06:38.887789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:17.562 [2024-11-05 16:06:38.887804] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:41:17.562 [2024-11-05 16:06:38.887816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:41:17.562 [2024-11-05 16:06:38.887825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:17.562 [2024-11-05 16:06:38.913188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:17.562 [2024-11-05 16:06:38.913235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:41:17.562 [2024-11-05 16:06:38.913248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.338 ms 00:41:17.562 [2024-11-05 16:06:38.913256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:17.825 [2024-11-05 16:06:38.938225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:17.825 [2024-11-05 16:06:38.938271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:41:17.825 [2024-11-05 16:06:38.938284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.916 ms 00:41:17.825 [2024-11-05 16:06:38.938302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:17.825 [2024-11-05 16:06:38.962929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:17.825 [2024-11-05 16:06:38.962976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:41:17.825 [2024-11-05 16:06:38.962989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.571 ms 00:41:17.825 [2024-11-05 16:06:38.962995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:17.825 [2024-11-05 16:06:38.987029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:17.825 [2024-11-05 16:06:38.987076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:41:17.825 [2024-11-05 16:06:38.987089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.930 ms 00:41:17.825 [2024-11-05 16:06:38.987096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:17.825 [2024-11-05 16:06:38.987146] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:41:17.825 [2024-11-05 16:06:38.987160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987251] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 
[2024-11-05 16:06:38.987473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:41:17.825 [2024-11-05 16:06:38.987699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:41:17.825 [2024-11-05 16:06:38.987826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:41:17.826 [2024-11-05 16:06:38.987833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:41:17.826 [2024-11-05 16:06:38.987843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:41:17.826 [2024-11-05 16:06:38.987851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:41:17.826 [2024-11-05 16:06:38.987860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:41:17.826 [2024-11-05 16:06:38.987868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:41:17.826 [2024-11-05 16:06:38.987880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:41:17.826 [2024-11-05 16:06:38.987887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:41:17.826 [2024-11-05 16:06:38.987897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:41:17.826 [2024-11-05 16:06:38.987906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:41:17.826 [2024-11-05 16:06:38.987916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:41:17.826 [2024-11-05 16:06:38.987925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:41:17.826 [2024-11-05 16:06:38.987935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:41:17.826 [2024-11-05 16:06:38.987944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:41:17.826 [2024-11-05 16:06:38.987956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:41:17.826 [2024-11-05 16:06:38.987963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:41:17.826 [2024-11-05 16:06:38.987976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:41:17.826 [2024-11-05 16:06:38.987987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:41:17.826 [2024-11-05 16:06:38.987997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:41:17.826 [2024-11-05 16:06:38.988006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:41:17.826 [2024-11-05 16:06:38.988016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:41:17.826 [2024-11-05 16:06:38.988025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:41:17.826 [2024-11-05 16:06:38.988035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:41:17.826 [2024-11-05 16:06:38.988044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:41:17.826 [2024-11-05 16:06:38.988054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:41:17.826 [2024-11-05 16:06:38.988063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:41:17.826 [2024-11-05 16:06:38.988074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:41:17.826 [2024-11-05 16:06:38.988084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:41:17.826 [2024-11-05 16:06:38.988095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:41:17.826 [2024-11-05 16:06:38.988113] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:41:17.826 [2024-11-05 16:06:38.988126] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 49f3c030-3db9-4227-a747-0995db4fc140 00:41:17.826 [2024-11-05 16:06:38.988136] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:41:17.826 [2024-11-05 16:06:38.988149] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:41:17.826 [2024-11-05 16:06:38.988156] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:41:17.826 [2024-11-05 16:06:38.988168] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:41:17.826 [2024-11-05 16:06:38.988176] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:41:17.826 [2024-11-05 16:06:38.988186] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:41:17.826 [2024-11-05 16:06:38.988194] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:41:17.826 [2024-11-05 16:06:38.988203] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:41:17.826 [2024-11-05 16:06:38.988209] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:41:17.826 [2024-11-05 16:06:38.988219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:17.826 [2024-11-05 16:06:38.988226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:41:17.826 [2024-11-05 16:06:38.988237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.075 ms 00:41:17.826 [2024-11-05 16:06:38.988245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:17.826 [2024-11-05 16:06:39.001790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:17.826 [2024-11-05 16:06:39.001830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:41:17.826 [2024-11-05 16:06:39.001844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.498 ms 00:41:17.826 [2024-11-05 16:06:39.001851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:17.826 [2024-11-05 16:06:39.002249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:17.826 [2024-11-05 16:06:39.002415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:41:17.826 [2024-11-05 16:06:39.002436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.353 ms 00:41:17.826 [2024-11-05 16:06:39.002448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:17.826 [2024-11-05 16:06:39.049185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:17.826 [2024-11-05 16:06:39.049234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:41:17.826 [2024-11-05 16:06:39.049249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:17.826 [2024-11-05 16:06:39.049258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:17.826 [2024-11-05 16:06:39.049330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:17.826 [2024-11-05 16:06:39.049339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:41:17.826 [2024-11-05 16:06:39.049350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:17.826 [2024-11-05 16:06:39.049361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:17.826 [2024-11-05 16:06:39.049448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:17.826 [2024-11-05 16:06:39.049460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:41:17.826 [2024-11-05 16:06:39.049470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:17.826 [2024-11-05 16:06:39.049478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:17.826 [2024-11-05 16:06:39.049500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:17.826 [2024-11-05 16:06:39.049508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:41:17.826 [2024-11-05 16:06:39.049519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:17.826 [2024-11-05 16:06:39.049527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:17.826 [2024-11-05 16:06:39.133987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:17.826 [2024-11-05 16:06:39.134042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:41:17.826 [2024-11-05 16:06:39.134058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
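In the statistics dump just above, WAF (write amplification factor) is the ratio of total media writes to user writes; only the 960 blocks of startup metadata have been written and no user data yet, so ftl_debug.c reports inf. A one-liner reproducing the same guard, assuming nothing beyond the two counters shown:

  awk 'BEGIN { total = 960; user = 0; print (user ? total / user : "inf") }'   # -> inf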
00:41:17.826 [2024-11-05 16:06:39.134066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:18.088 [2024-11-05 16:06:39.203370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:18.088 [2024-11-05 16:06:39.203428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:41:18.088 [2024-11-05 16:06:39.203443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:18.088 [2024-11-05 16:06:39.203455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:18.088 [2024-11-05 16:06:39.203568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:18.088 [2024-11-05 16:06:39.203580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:41:18.088 [2024-11-05 16:06:39.203591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:18.088 [2024-11-05 16:06:39.203599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:18.088 [2024-11-05 16:06:39.203655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:18.088 [2024-11-05 16:06:39.203668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:41:18.088 [2024-11-05 16:06:39.203679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:18.088 [2024-11-05 16:06:39.203687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:18.088 [2024-11-05 16:06:39.203825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:18.088 [2024-11-05 16:06:39.203837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:41:18.088 [2024-11-05 16:06:39.203848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:18.088 [2024-11-05 16:06:39.203856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:18.088 [2024-11-05 16:06:39.203893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:18.088 [2024-11-05 16:06:39.203902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:41:18.088 [2024-11-05 16:06:39.203912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:18.088 [2024-11-05 16:06:39.203919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:18.088 [2024-11-05 16:06:39.203962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:18.088 [2024-11-05 16:06:39.203975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:41:18.088 [2024-11-05 16:06:39.203985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:18.088 [2024-11-05 16:06:39.203993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:18.088 [2024-11-05 16:06:39.204045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:18.088 [2024-11-05 16:06:39.204058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:41:18.089 [2024-11-05 16:06:39.204070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:18.089 [2024-11-05 16:06:39.204078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:18.089 [2024-11-05 16:06:39.204226] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 373.385 ms, result 0 00:41:18.089 true 00:41:18.089 16:06:39 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 74402 
00:41:18.089 16:06:39 ftl.ftl_restore -- common/autotest_common.sh@952 -- # '[' -z 74402 ']' 00:41:18.089 16:06:39 ftl.ftl_restore -- common/autotest_common.sh@956 -- # kill -0 74402 00:41:18.089 16:06:39 ftl.ftl_restore -- common/autotest_common.sh@957 -- # uname 00:41:18.089 16:06:39 ftl.ftl_restore -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:41:18.089 16:06:39 ftl.ftl_restore -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74402 00:41:18.089 16:06:39 ftl.ftl_restore -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:41:18.089 killing process with pid 74402 00:41:18.089 16:06:39 ftl.ftl_restore -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:41:18.089 16:06:39 ftl.ftl_restore -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74402' 00:41:18.089 16:06:39 ftl.ftl_restore -- common/autotest_common.sh@971 -- # kill 74402 00:41:18.089 16:06:39 ftl.ftl_restore -- common/autotest_common.sh@976 -- # wait 74402 00:41:24.674 16:06:45 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:41:27.959 262144+0 records in 00:41:27.959 262144+0 records out 00:41:27.959 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.62675 s, 296 MB/s 00:41:27.959 16:06:48 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:41:29.334 16:06:50 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:41:29.334 [2024-11-05 16:06:50.478969] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 00:41:29.334 [2024-11-05 16:06:50.479140] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74628 ] 00:41:29.334 [2024-11-05 16:06:50.644151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:29.595 [2024-11-05 16:06:50.750985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:29.855 [2024-11-05 16:06:51.038673] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:41:29.856 [2024-11-05 16:06:51.038776] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:41:29.856 [2024-11-05 16:06:51.199166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:29.856 [2024-11-05 16:06:51.199226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:41:29.856 [2024-11-05 16:06:51.199248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:41:29.856 [2024-11-05 16:06:51.199257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:29.856 [2024-11-05 16:06:51.199310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:29.856 [2024-11-05 16:06:51.199321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:41:29.856 [2024-11-05 16:06:51.199333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:41:29.856 [2024-11-05 16:06:51.199341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:29.856 [2024-11-05 16:06:51.199363] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:41:29.856 [2024-11-05 16:06:51.200187] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:41:29.856 [2024-11-05 16:06:51.200221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:29.856 [2024-11-05 16:06:51.200230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:41:29.856 [2024-11-05 16:06:51.200240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.863 ms 00:41:29.856 [2024-11-05 16:06:51.200247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:29.856 [2024-11-05 16:06:51.202029] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:41:29.856 [2024-11-05 16:06:51.216023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:29.856 [2024-11-05 16:06:51.216076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:41:29.856 [2024-11-05 16:06:51.216090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.996 ms 00:41:29.856 [2024-11-05 16:06:51.216099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:29.856 [2024-11-05 16:06:51.216183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:29.856 [2024-11-05 16:06:51.216194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:41:29.856 [2024-11-05 16:06:51.216203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:41:29.856 [2024-11-05 16:06:51.216210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.119 [2024-11-05 16:06:51.224590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:30.119 [2024-11-05 16:06:51.224637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:41:30.119 [2024-11-05 16:06:51.224647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.299 ms 00:41:30.119 [2024-11-05 16:06:51.224656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.119 [2024-11-05 16:06:51.224766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:30.119 [2024-11-05 16:06:51.224776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:41:30.119 [2024-11-05 16:06:51.224785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:41:30.119 [2024-11-05 16:06:51.224794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.119 [2024-11-05 16:06:51.224840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:30.119 [2024-11-05 16:06:51.224850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:41:30.119 [2024-11-05 16:06:51.224860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:41:30.119 [2024-11-05 16:06:51.224867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.119 [2024-11-05 16:06:51.224890] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:41:30.119 [2024-11-05 16:06:51.229079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:30.119 [2024-11-05 16:06:51.229119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:41:30.119 [2024-11-05 16:06:51.229130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.194 ms 00:41:30.119 [2024-11-05 16:06:51.229141] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.119 [2024-11-05 16:06:51.229177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:30.119 [2024-11-05 16:06:51.229186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:41:30.119 [2024-11-05 16:06:51.229195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:41:30.119 [2024-11-05 16:06:51.229203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.119 [2024-11-05 16:06:51.229261] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:41:30.119 [2024-11-05 16:06:51.229285] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:41:30.119 [2024-11-05 16:06:51.229328] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:41:30.119 [2024-11-05 16:06:51.229347] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:41:30.119 [2024-11-05 16:06:51.229454] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:41:30.119 [2024-11-05 16:06:51.229466] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:41:30.119 [2024-11-05 16:06:51.229477] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:41:30.119 [2024-11-05 16:06:51.229488] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:41:30.119 [2024-11-05 16:06:51.229496] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:41:30.120 [2024-11-05 16:06:51.229505] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:41:30.120 [2024-11-05 16:06:51.229513] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:41:30.120 [2024-11-05 16:06:51.229522] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:41:30.120 [2024-11-05 16:06:51.229529] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:41:30.120 [2024-11-05 16:06:51.229542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:30.120 [2024-11-05 16:06:51.229549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:41:30.120 [2024-11-05 16:06:51.229558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:41:30.120 [2024-11-05 16:06:51.229565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.120 [2024-11-05 16:06:51.229650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:30.120 [2024-11-05 16:06:51.229659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:41:30.120 [2024-11-05 16:06:51.229667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:41:30.120 [2024-11-05 16:06:51.229676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.120 [2024-11-05 16:06:51.229798] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:41:30.120 [2024-11-05 16:06:51.229813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:41:30.120 [2024-11-05 16:06:51.229821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:41:30.120 [2024-11-05 16:06:51.229830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:30.120 [2024-11-05 16:06:51.229839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:41:30.120 [2024-11-05 16:06:51.229847] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:41:30.120 [2024-11-05 16:06:51.229854] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:41:30.120 [2024-11-05 16:06:51.229861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:41:30.120 [2024-11-05 16:06:51.229869] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:41:30.120 [2024-11-05 16:06:51.229876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:41:30.120 [2024-11-05 16:06:51.229884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:41:30.120 [2024-11-05 16:06:51.229892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:41:30.120 [2024-11-05 16:06:51.229898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:41:30.120 [2024-11-05 16:06:51.229908] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:41:30.120 [2024-11-05 16:06:51.229915] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:41:30.120 [2024-11-05 16:06:51.229929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:30.120 [2024-11-05 16:06:51.229936] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:41:30.120 [2024-11-05 16:06:51.229943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:41:30.120 [2024-11-05 16:06:51.229950] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:30.120 [2024-11-05 16:06:51.229958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:41:30.120 [2024-11-05 16:06:51.229965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:41:30.120 [2024-11-05 16:06:51.229972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:41:30.120 [2024-11-05 16:06:51.229978] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:41:30.120 [2024-11-05 16:06:51.229985] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:41:30.120 [2024-11-05 16:06:51.229991] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:41:30.120 [2024-11-05 16:06:51.229998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:41:30.120 [2024-11-05 16:06:51.230005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:41:30.120 [2024-11-05 16:06:51.230012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:41:30.120 [2024-11-05 16:06:51.230018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:41:30.120 [2024-11-05 16:06:51.230025] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:41:30.120 [2024-11-05 16:06:51.230033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:41:30.120 [2024-11-05 16:06:51.230041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:41:30.120 [2024-11-05 16:06:51.230047] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:41:30.120 [2024-11-05 16:06:51.230053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:41:30.120 [2024-11-05 16:06:51.230061] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:41:30.120 [2024-11-05 16:06:51.230067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:41:30.120 [2024-11-05 16:06:51.230074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:41:30.120 [2024-11-05 16:06:51.230081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:41:30.120 [2024-11-05 16:06:51.230088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:41:30.120 [2024-11-05 16:06:51.230094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:30.120 [2024-11-05 16:06:51.230101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:41:30.120 [2024-11-05 16:06:51.230108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:41:30.120 [2024-11-05 16:06:51.230114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:30.120 [2024-11-05 16:06:51.230121] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:41:30.120 [2024-11-05 16:06:51.230129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:41:30.120 [2024-11-05 16:06:51.230139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:41:30.120 [2024-11-05 16:06:51.230147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:30.120 [2024-11-05 16:06:51.230155] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:41:30.120 [2024-11-05 16:06:51.230164] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:41:30.120 [2024-11-05 16:06:51.230171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:41:30.120 [2024-11-05 16:06:51.230180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:41:30.120 [2024-11-05 16:06:51.230187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:41:30.120 [2024-11-05 16:06:51.230195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:41:30.120 [2024-11-05 16:06:51.230204] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:41:30.120 [2024-11-05 16:06:51.230213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:41:30.120 [2024-11-05 16:06:51.230222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:41:30.120 [2024-11-05 16:06:51.230230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:41:30.120 [2024-11-05 16:06:51.230239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:41:30.120 [2024-11-05 16:06:51.230246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:41:30.120 [2024-11-05 16:06:51.230253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:41:30.120 [2024-11-05 16:06:51.230260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:41:30.120 [2024-11-05 16:06:51.230269] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:41:30.120 [2024-11-05 16:06:51.230277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:41:30.120 [2024-11-05 16:06:51.230315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:41:30.120 [2024-11-05 16:06:51.230325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:41:30.120 [2024-11-05 16:06:51.230332] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:41:30.120 [2024-11-05 16:06:51.230340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:41:30.120 [2024-11-05 16:06:51.230348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:41:30.120 [2024-11-05 16:06:51.230356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:41:30.120 [2024-11-05 16:06:51.230363] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:41:30.120 [2024-11-05 16:06:51.230376] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:41:30.120 [2024-11-05 16:06:51.230384] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:41:30.120 [2024-11-05 16:06:51.230393] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:41:30.120 [2024-11-05 16:06:51.230401] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:41:30.120 [2024-11-05 16:06:51.230409] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:41:30.120 [2024-11-05 16:06:51.230417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:30.120 [2024-11-05 16:06:51.230424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:41:30.120 [2024-11-05 16:06:51.230433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.705 ms 00:41:30.120 [2024-11-05 16:06:51.230442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.120 [2024-11-05 16:06:51.262658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:30.120 [2024-11-05 16:06:51.262708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:41:30.120 [2024-11-05 16:06:51.262721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.170 ms 00:41:30.120 [2024-11-05 16:06:51.262729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.120 [2024-11-05 16:06:51.262850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:30.120 [2024-11-05 16:06:51.262860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:41:30.120 [2024-11-05 16:06:51.262870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.064 ms 00:41:30.120 [2024-11-05 16:06:51.262879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.120 [2024-11-05 16:06:51.311026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:30.121 [2024-11-05 16:06:51.311081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:41:30.121 [2024-11-05 16:06:51.311094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.088 ms 00:41:30.121 [2024-11-05 16:06:51.311103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.121 [2024-11-05 16:06:51.311157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:30.121 [2024-11-05 16:06:51.311167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:41:30.121 [2024-11-05 16:06:51.311177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:41:30.121 [2024-11-05 16:06:51.311189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.121 [2024-11-05 16:06:51.311835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:30.121 [2024-11-05 16:06:51.311877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:41:30.121 [2024-11-05 16:06:51.311889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.571 ms 00:41:30.121 [2024-11-05 16:06:51.311897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.121 [2024-11-05 16:06:51.312059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:30.121 [2024-11-05 16:06:51.312070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:41:30.121 [2024-11-05 16:06:51.312079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:41:30.121 [2024-11-05 16:06:51.312094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.121 [2024-11-05 16:06:51.328113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:30.121 [2024-11-05 16:06:51.328158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:41:30.121 [2024-11-05 16:06:51.328172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.997 ms 00:41:30.121 [2024-11-05 16:06:51.328180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.121 [2024-11-05 16:06:51.342604] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:41:30.121 [2024-11-05 16:06:51.342654] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:41:30.121 [2024-11-05 16:06:51.342669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:30.121 [2024-11-05 16:06:51.342678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:41:30.121 [2024-11-05 16:06:51.342688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.378 ms 00:41:30.121 [2024-11-05 16:06:51.342695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.121 [2024-11-05 16:06:51.368525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:30.121 [2024-11-05 16:06:51.368594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:41:30.121 [2024-11-05 16:06:51.368613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.763 ms 00:41:30.121 [2024-11-05 16:06:51.368620] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.121 [2024-11-05 16:06:51.381586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:30.121 [2024-11-05 16:06:51.381642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:41:30.121 [2024-11-05 16:06:51.381653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.935 ms 00:41:30.121 [2024-11-05 16:06:51.381661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.121 [2024-11-05 16:06:51.394481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:30.121 [2024-11-05 16:06:51.394526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:41:30.121 [2024-11-05 16:06:51.394538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.772 ms 00:41:30.121 [2024-11-05 16:06:51.394546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.121 [2024-11-05 16:06:51.395205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:30.121 [2024-11-05 16:06:51.395240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:41:30.121 [2024-11-05 16:06:51.395251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.546 ms 00:41:30.121 [2024-11-05 16:06:51.395259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.121 [2024-11-05 16:06:51.461335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:30.121 [2024-11-05 16:06:51.461420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:41:30.121 [2024-11-05 16:06:51.461435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.052 ms 00:41:30.121 [2024-11-05 16:06:51.461451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.121 [2024-11-05 16:06:51.473117] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:41:30.121 [2024-11-05 16:06:51.476585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:30.121 [2024-11-05 16:06:51.476632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:41:30.121 [2024-11-05 16:06:51.476645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.071 ms 00:41:30.121 [2024-11-05 16:06:51.476655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.121 [2024-11-05 16:06:51.476767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:30.121 [2024-11-05 16:06:51.476780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:41:30.121 [2024-11-05 16:06:51.476792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:41:30.121 [2024-11-05 16:06:51.476801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.121 [2024-11-05 16:06:51.476876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:30.121 [2024-11-05 16:06:51.476887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:41:30.121 [2024-11-05 16:06:51.476897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:41:30.121 [2024-11-05 16:06:51.476905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.121 [2024-11-05 16:06:51.476929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:30.121 [2024-11-05 16:06:51.476938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:41:30.121 [2024-11-05 16:06:51.476947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:41:30.121 [2024-11-05 16:06:51.476955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.121 [2024-11-05 16:06:51.476987] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:41:30.121 [2024-11-05 16:06:51.476998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:30.121 [2024-11-05 16:06:51.477009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:41:30.121 [2024-11-05 16:06:51.477018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:41:30.121 [2024-11-05 16:06:51.477025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.382 [2024-11-05 16:06:51.502860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:30.382 [2024-11-05 16:06:51.502911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:41:30.382 [2024-11-05 16:06:51.502925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.813 ms 00:41:30.382 [2024-11-05 16:06:51.502934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.382 [2024-11-05 16:06:51.503030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:30.382 [2024-11-05 16:06:51.503041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:41:30.382 [2024-11-05 16:06:51.503050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:41:30.382 [2024-11-05 16:06:51.503058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.382 [2024-11-05 16:06:51.504924] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 305.232 ms, result 0 00:41:31.323  [2024-11-05T16:06:53.667Z] Copying: 17/1024 [MB] (17 MBps) [2024-11-05T16:06:54.612Z] Copying: 36/1024 [MB] (19 MBps) [2024-11-05T16:06:55.557Z] Copying: 50/1024 [MB] (14 MBps) [2024-11-05T16:06:56.942Z] Copying: 67/1024 [MB] (17 MBps) [2024-11-05T16:06:57.515Z] Copying: 88/1024 [MB] (21 MBps) [2024-11-05T16:06:58.901Z] Copying: 102/1024 [MB] (13 MBps) [2024-11-05T16:06:59.855Z] Copying: 118/1024 [MB] (15 MBps) [2024-11-05T16:07:00.795Z] Copying: 132/1024 [MB] (14 MBps) [2024-11-05T16:07:01.738Z] Copying: 149/1024 [MB] (17 MBps) [2024-11-05T16:07:02.682Z] Copying: 159/1024 [MB] (10 MBps) [2024-11-05T16:07:03.627Z] Copying: 172/1024 [MB] (12 MBps) [2024-11-05T16:07:04.571Z] Copying: 187/1024 [MB] (14 MBps) [2024-11-05T16:07:05.516Z] Copying: 210/1024 [MB] (23 MBps) [2024-11-05T16:07:06.903Z] Copying: 221/1024 [MB] (11 MBps) [2024-11-05T16:07:07.849Z] Copying: 233/1024 [MB] (11 MBps) [2024-11-05T16:07:08.788Z] Copying: 245/1024 [MB] (11 MBps) [2024-11-05T16:07:09.730Z] Copying: 261224/1048576 [kB] (10148 kBps) [2024-11-05T16:07:10.671Z] Copying: 265/1024 [MB] (10 MBps) [2024-11-05T16:07:11.610Z] Copying: 275/1024 [MB] (10 MBps) [2024-11-05T16:07:12.546Z] Copying: 285/1024 [MB] (10 MBps) [2024-11-05T16:07:13.932Z] Copying: 317/1024 [MB] (31 MBps) [2024-11-05T16:07:14.874Z] Copying: 328/1024 [MB] (11 MBps) [2024-11-05T16:07:15.818Z] Copying: 342/1024 [MB] (13 MBps) [2024-11-05T16:07:16.763Z] Copying: 361/1024 [MB] (18 MBps) [2024-11-05T16:07:17.709Z] Copying: 379/1024 [MB] (18 MBps) [2024-11-05T16:07:18.651Z] Copying: 394/1024 [MB] (14 MBps) [2024-11-05T16:07:19.596Z] Copying: 404/1024 [MB] 
(10 MBps) [2024-11-05T16:07:20.539Z] Copying: 420/1024 [MB] (15 MBps) [2024-11-05T16:07:21.933Z] Copying: 432/1024 [MB] (11 MBps) [2024-11-05T16:07:22.877Z] Copying: 445/1024 [MB] (13 MBps) [2024-11-05T16:07:23.824Z] Copying: 465/1024 [MB] (20 MBps) [2024-11-05T16:07:24.767Z] Copying: 481/1024 [MB] (15 MBps) [2024-11-05T16:07:25.709Z] Copying: 493/1024 [MB] (12 MBps) [2024-11-05T16:07:26.652Z] Copying: 509/1024 [MB] (15 MBps) [2024-11-05T16:07:27.596Z] Copying: 522/1024 [MB] (13 MBps) [2024-11-05T16:07:28.541Z] Copying: 541/1024 [MB] (19 MBps) [2024-11-05T16:07:29.925Z] Copying: 556/1024 [MB] (14 MBps) [2024-11-05T16:07:30.870Z] Copying: 569/1024 [MB] (13 MBps) [2024-11-05T16:07:31.815Z] Copying: 589/1024 [MB] (19 MBps) [2024-11-05T16:07:32.754Z] Copying: 599/1024 [MB] (10 MBps) [2024-11-05T16:07:33.698Z] Copying: 627/1024 [MB] (27 MBps) [2024-11-05T16:07:34.642Z] Copying: 662/1024 [MB] (35 MBps) [2024-11-05T16:07:35.585Z] Copying: 680/1024 [MB] (17 MBps) [2024-11-05T16:07:36.544Z] Copying: 696/1024 [MB] (16 MBps) [2024-11-05T16:07:37.931Z] Copying: 708/1024 [MB] (11 MBps) [2024-11-05T16:07:38.874Z] Copying: 725/1024 [MB] (16 MBps) [2024-11-05T16:07:39.814Z] Copying: 746/1024 [MB] (21 MBps) [2024-11-05T16:07:40.752Z] Copying: 764/1024 [MB] (18 MBps) [2024-11-05T16:07:41.691Z] Copying: 779/1024 [MB] (14 MBps) [2024-11-05T16:07:42.635Z] Copying: 792/1024 [MB] (12 MBps) [2024-11-05T16:07:43.578Z] Copying: 802/1024 [MB] (10 MBps) [2024-11-05T16:07:44.522Z] Copying: 824/1024 [MB] (21 MBps) [2024-11-05T16:07:45.895Z] Copying: 834/1024 [MB] (10 MBps) [2024-11-05T16:07:46.827Z] Copying: 883/1024 [MB] (49 MBps) [2024-11-05T16:07:47.760Z] Copying: 936/1024 [MB] (52 MBps) [2024-11-05T16:07:48.693Z] Copying: 988/1024 [MB] (52 MBps) [2024-11-05T16:07:48.693Z] Copying: 1018/1024 [MB] (30 MBps) [2024-11-05T16:07:48.693Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-11-05 16:07:48.617991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:27.331 [2024-11-05 16:07:48.618024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:42:27.331 [2024-11-05 16:07:48.618036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:42:27.331 [2024-11-05 16:07:48.618043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:27.331 [2024-11-05 16:07:48.618058] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:42:27.331 [2024-11-05 16:07:48.620210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:27.331 [2024-11-05 16:07:48.620235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:42:27.331 [2024-11-05 16:07:48.620244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.140 ms 00:42:27.331 [2024-11-05 16:07:48.620250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:27.331 [2024-11-05 16:07:48.621876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:27.331 [2024-11-05 16:07:48.621900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:42:27.331 [2024-11-05 16:07:48.621907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.604 ms 00:42:27.331 [2024-11-05 16:07:48.621913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:27.331 [2024-11-05 16:07:48.633868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:27.331 [2024-11-05 16:07:48.633896] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Persist L2P 00:42:27.331 [2024-11-05 16:07:48.633904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.944 ms 00:42:27.331 [2024-11-05 16:07:48.633911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:27.331 [2024-11-05 16:07:48.638720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:27.331 [2024-11-05 16:07:48.638753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:42:27.331 [2024-11-05 16:07:48.638761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.786 ms 00:42:27.331 [2024-11-05 16:07:48.638767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:27.331 [2024-11-05 16:07:48.657615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:27.331 [2024-11-05 16:07:48.657640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:42:27.331 [2024-11-05 16:07:48.657649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.812 ms 00:42:27.331 [2024-11-05 16:07:48.657655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:27.331 [2024-11-05 16:07:48.669325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:27.331 [2024-11-05 16:07:48.669351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:42:27.331 [2024-11-05 16:07:48.669360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.644 ms 00:42:27.331 [2024-11-05 16:07:48.669367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:27.331 [2024-11-05 16:07:48.669441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:27.331 [2024-11-05 16:07:48.669447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:42:27.331 [2024-11-05 16:07:48.669457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:42:27.331 [2024-11-05 16:07:48.669463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:27.331 [2024-11-05 16:07:48.687517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:27.331 [2024-11-05 16:07:48.687540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:42:27.331 [2024-11-05 16:07:48.687548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.043 ms 00:42:27.331 [2024-11-05 16:07:48.687554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:27.592 [2024-11-05 16:07:48.705329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:27.592 [2024-11-05 16:07:48.705353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:42:27.592 [2024-11-05 16:07:48.705368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.751 ms 00:42:27.592 [2024-11-05 16:07:48.705373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:27.592 [2024-11-05 16:07:48.722529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:27.592 [2024-11-05 16:07:48.722553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:42:27.592 [2024-11-05 16:07:48.722561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.131 ms 00:42:27.592 [2024-11-05 16:07:48.722567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:27.592 [2024-11-05 16:07:48.740357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:27.592 
[2024-11-05 16:07:48.740382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:42:27.592 [2024-11-05 16:07:48.740389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.750 ms 00:42:27.592 [2024-11-05 16:07:48.740395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:27.592 [2024-11-05 16:07:48.740419] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:42:27.592 [2024-11-05 16:07:48.740429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:42:27.592 [2024-11-05 16:07:48.740437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:42:27.592 [2024-11-05 16:07:48.740443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:42:27.592 [2024-11-05 16:07:48.740448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:42:27.592 [2024-11-05 16:07:48.740455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:42:27.592 [2024-11-05 16:07:48.740460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 
0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740846] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:42:27.593 [2024-11-05 16:07:48.740998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:42:27.594 [2024-11-05 
16:07:48.741004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:42:27.594 [2024-11-05 16:07:48.741010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:42:27.594 [2024-11-05 16:07:48.741016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:42:27.594 [2024-11-05 16:07:48.741022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:42:27.594 [2024-11-05 16:07:48.741034] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:42:27.594 [2024-11-05 16:07:48.741044] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 49f3c030-3db9-4227-a747-0995db4fc140 00:42:27.594 [2024-11-05 16:07:48.741050] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:42:27.594 [2024-11-05 16:07:48.741058] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:42:27.594 [2024-11-05 16:07:48.741063] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:42:27.594 [2024-11-05 16:07:48.741069] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:42:27.594 [2024-11-05 16:07:48.741074] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:42:27.594 [2024-11-05 16:07:48.741080] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:42:27.594 [2024-11-05 16:07:48.741086] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:42:27.594 [2024-11-05 16:07:48.741095] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:42:27.594 [2024-11-05 16:07:48.741100] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:42:27.594 [2024-11-05 16:07:48.741105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:27.594 [2024-11-05 16:07:48.741110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:42:27.594 [2024-11-05 16:07:48.741117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.686 ms 00:42:27.594 [2024-11-05 16:07:48.741123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:27.594 [2024-11-05 16:07:48.750653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:27.594 [2024-11-05 16:07:48.750677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:42:27.594 [2024-11-05 16:07:48.750685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.519 ms 00:42:27.594 [2024-11-05 16:07:48.750691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:27.594 [2024-11-05 16:07:48.750963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:27.594 [2024-11-05 16:07:48.750993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:42:27.594 [2024-11-05 16:07:48.751000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:42:27.594 [2024-11-05 16:07:48.751006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:27.594 [2024-11-05 16:07:48.776648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:27.594 [2024-11-05 16:07:48.776674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:42:27.594 [2024-11-05 16:07:48.776681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:27.594 [2024-11-05 16:07:48.776687] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:27.594 [2024-11-05 16:07:48.776725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:27.594 [2024-11-05 16:07:48.776732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:42:27.594 [2024-11-05 16:07:48.776748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:27.594 [2024-11-05 16:07:48.776754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:27.594 [2024-11-05 16:07:48.776792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:27.594 [2024-11-05 16:07:48.776799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:42:27.594 [2024-11-05 16:07:48.776805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:27.594 [2024-11-05 16:07:48.776812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:27.594 [2024-11-05 16:07:48.776822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:27.594 [2024-11-05 16:07:48.776829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:42:27.594 [2024-11-05 16:07:48.776834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:27.594 [2024-11-05 16:07:48.776840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:27.594 [2024-11-05 16:07:48.835470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:27.594 [2024-11-05 16:07:48.835501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:42:27.594 [2024-11-05 16:07:48.835509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:27.594 [2024-11-05 16:07:48.835515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:27.594 [2024-11-05 16:07:48.883432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:27.594 [2024-11-05 16:07:48.883465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:42:27.594 [2024-11-05 16:07:48.883473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:27.594 [2024-11-05 16:07:48.883480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:27.594 [2024-11-05 16:07:48.883534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:27.594 [2024-11-05 16:07:48.883544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:42:27.594 [2024-11-05 16:07:48.883550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:27.594 [2024-11-05 16:07:48.883556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:27.594 [2024-11-05 16:07:48.883581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:27.594 [2024-11-05 16:07:48.883587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:42:27.594 [2024-11-05 16:07:48.883594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:27.594 [2024-11-05 16:07:48.883599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:27.594 [2024-11-05 16:07:48.883666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:27.594 [2024-11-05 16:07:48.883676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:42:27.594 [2024-11-05 16:07:48.883683] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:27.594 [2024-11-05 16:07:48.883692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:27.594 [2024-11-05 16:07:48.883714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:27.594 [2024-11-05 16:07:48.883722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:42:27.594 [2024-11-05 16:07:48.883728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:27.594 [2024-11-05 16:07:48.883750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:27.594 [2024-11-05 16:07:48.883777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:27.594 [2024-11-05 16:07:48.883784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:42:27.594 [2024-11-05 16:07:48.883792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:27.594 [2024-11-05 16:07:48.883798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:27.594 [2024-11-05 16:07:48.883828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:27.594 [2024-11-05 16:07:48.883835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:42:27.594 [2024-11-05 16:07:48.883841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:27.594 [2024-11-05 16:07:48.883847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:27.594 [2024-11-05 16:07:48.883933] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 265.919 ms, result 0 00:42:28.161 00:42:28.161 00:42:28.161 16:07:49 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:42:28.443 [2024-11-05 16:07:49.557067] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 
00:42:28.443 [2024-11-05 16:07:49.557156] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75239 ] 00:42:28.443 [2024-11-05 16:07:49.706867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:28.443 [2024-11-05 16:07:49.785556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:28.702 [2024-11-05 16:07:49.989413] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:42:28.702 [2024-11-05 16:07:49.989459] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:42:28.963 [2024-11-05 16:07:50.142488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:28.963 [2024-11-05 16:07:50.142529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:42:28.963 [2024-11-05 16:07:50.142546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:42:28.963 [2024-11-05 16:07:50.142554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:28.963 [2024-11-05 16:07:50.142600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:28.963 [2024-11-05 16:07:50.142610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:42:28.963 [2024-11-05 16:07:50.142620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:42:28.963 [2024-11-05 16:07:50.142628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:28.963 [2024-11-05 16:07:50.142644] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:42:28.963 [2024-11-05 16:07:50.143344] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:42:28.963 [2024-11-05 16:07:50.143373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:28.963 [2024-11-05 16:07:50.143380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:42:28.963 [2024-11-05 16:07:50.143389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.732 ms 00:42:28.963 [2024-11-05 16:07:50.143396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:28.963 [2024-11-05 16:07:50.144428] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:42:28.963 [2024-11-05 16:07:50.157148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:28.963 [2024-11-05 16:07:50.157181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:42:28.963 [2024-11-05 16:07:50.157193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.721 ms 00:42:28.963 [2024-11-05 16:07:50.157201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:28.963 [2024-11-05 16:07:50.157254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:28.963 [2024-11-05 16:07:50.157263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:42:28.963 [2024-11-05 16:07:50.157271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:42:28.963 [2024-11-05 16:07:50.157278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:28.963 [2024-11-05 16:07:50.162273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:42:28.963 [2024-11-05 16:07:50.162308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:42:28.963 [2024-11-05 16:07:50.162317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.939 ms 00:42:28.963 [2024-11-05 16:07:50.162325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:28.963 [2024-11-05 16:07:50.162392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:28.963 [2024-11-05 16:07:50.162401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:42:28.963 [2024-11-05 16:07:50.162408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:42:28.963 [2024-11-05 16:07:50.162416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:28.963 [2024-11-05 16:07:50.162457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:28.963 [2024-11-05 16:07:50.162467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:42:28.963 [2024-11-05 16:07:50.162476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:42:28.963 [2024-11-05 16:07:50.162483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:28.963 [2024-11-05 16:07:50.162505] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:42:28.963 [2024-11-05 16:07:50.165691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:28.963 [2024-11-05 16:07:50.165717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:42:28.963 [2024-11-05 16:07:50.165726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.191 ms 00:42:28.963 [2024-11-05 16:07:50.165747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:28.963 [2024-11-05 16:07:50.165775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:28.963 [2024-11-05 16:07:50.165782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:42:28.963 [2024-11-05 16:07:50.165790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:42:28.963 [2024-11-05 16:07:50.165798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:28.963 [2024-11-05 16:07:50.165816] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:42:28.963 [2024-11-05 16:07:50.165833] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:42:28.963 [2024-11-05 16:07:50.165867] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:42:28.963 [2024-11-05 16:07:50.165884] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:42:28.963 [2024-11-05 16:07:50.165986] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:42:28.963 [2024-11-05 16:07:50.165997] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:42:28.963 [2024-11-05 16:07:50.166007] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:42:28.963 [2024-11-05 16:07:50.166016] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:42:28.964 [2024-11-05 16:07:50.166025] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:42:28.964 [2024-11-05 16:07:50.166033] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:42:28.964 [2024-11-05 16:07:50.166040] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:42:28.964 [2024-11-05 16:07:50.166048] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:42:28.964 [2024-11-05 16:07:50.166055] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:42:28.964 [2024-11-05 16:07:50.166064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:28.964 [2024-11-05 16:07:50.166072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:42:28.964 [2024-11-05 16:07:50.166080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:42:28.964 [2024-11-05 16:07:50.166087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:28.964 [2024-11-05 16:07:50.166170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:28.964 [2024-11-05 16:07:50.166179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:42:28.964 [2024-11-05 16:07:50.166187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:42:28.964 [2024-11-05 16:07:50.166194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:28.964 [2024-11-05 16:07:50.166302] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:42:28.964 [2024-11-05 16:07:50.166315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:42:28.964 [2024-11-05 16:07:50.166323] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:42:28.964 [2024-11-05 16:07:50.166330] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:28.964 [2024-11-05 16:07:50.166339] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:42:28.964 [2024-11-05 16:07:50.166346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:42:28.964 [2024-11-05 16:07:50.166353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:42:28.964 [2024-11-05 16:07:50.166360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:42:28.964 [2024-11-05 16:07:50.166368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:42:28.964 [2024-11-05 16:07:50.166375] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:42:28.964 [2024-11-05 16:07:50.166382] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:42:28.964 [2024-11-05 16:07:50.166391] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:42:28.964 [2024-11-05 16:07:50.166397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:42:28.964 [2024-11-05 16:07:50.166404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:42:28.964 [2024-11-05 16:07:50.166411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:42:28.964 [2024-11-05 16:07:50.166422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:28.964 [2024-11-05 16:07:50.166430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:42:28.964 [2024-11-05 16:07:50.166436] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:42:28.964 [2024-11-05 16:07:50.166443] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:28.964 [2024-11-05 16:07:50.166451] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:42:28.964 [2024-11-05 16:07:50.166458] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:42:28.964 [2024-11-05 16:07:50.166465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:42:28.964 [2024-11-05 16:07:50.166472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:42:28.964 [2024-11-05 16:07:50.166479] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:42:28.964 [2024-11-05 16:07:50.166485] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:42:28.964 [2024-11-05 16:07:50.166491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:42:28.964 [2024-11-05 16:07:50.166498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:42:28.964 [2024-11-05 16:07:50.166506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:42:28.964 [2024-11-05 16:07:50.166512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:42:28.964 [2024-11-05 16:07:50.166520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:42:28.964 [2024-11-05 16:07:50.166527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:42:28.964 [2024-11-05 16:07:50.166534] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:42:28.964 [2024-11-05 16:07:50.166541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:42:28.964 [2024-11-05 16:07:50.166547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:42:28.964 [2024-11-05 16:07:50.166553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:42:28.964 [2024-11-05 16:07:50.166560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:42:28.964 [2024-11-05 16:07:50.166566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:42:28.964 [2024-11-05 16:07:50.166572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:42:28.964 [2024-11-05 16:07:50.166578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:42:28.964 [2024-11-05 16:07:50.166584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:28.964 [2024-11-05 16:07:50.166590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:42:28.964 [2024-11-05 16:07:50.166598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:42:28.964 [2024-11-05 16:07:50.166604] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:28.964 [2024-11-05 16:07:50.166610] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:42:28.964 [2024-11-05 16:07:50.166618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:42:28.964 [2024-11-05 16:07:50.166625] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:42:28.964 [2024-11-05 16:07:50.166632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:28.964 [2024-11-05 16:07:50.166639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:42:28.964 [2024-11-05 16:07:50.166646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:42:28.964 [2024-11-05 16:07:50.166653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:42:28.964 
[2024-11-05 16:07:50.166659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:42:28.964 [2024-11-05 16:07:50.166666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:42:28.964 [2024-11-05 16:07:50.166672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:42:28.964 [2024-11-05 16:07:50.166680] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:42:28.964 [2024-11-05 16:07:50.166688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:42:28.964 [2024-11-05 16:07:50.166696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:42:28.964 [2024-11-05 16:07:50.166703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:42:28.964 [2024-11-05 16:07:50.166710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:42:28.964 [2024-11-05 16:07:50.166717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:42:28.964 [2024-11-05 16:07:50.166724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:42:28.964 [2024-11-05 16:07:50.166732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:42:28.964 [2024-11-05 16:07:50.166750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:42:28.964 [2024-11-05 16:07:50.166757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:42:28.964 [2024-11-05 16:07:50.166765] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:42:28.964 [2024-11-05 16:07:50.166772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:42:28.964 [2024-11-05 16:07:50.166779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:42:28.964 [2024-11-05 16:07:50.166785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:42:28.964 [2024-11-05 16:07:50.166792] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:42:28.964 [2024-11-05 16:07:50.166800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:42:28.964 [2024-11-05 16:07:50.166807] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:42:28.964 [2024-11-05 16:07:50.166818] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:42:28.964 [2024-11-05 16:07:50.166826] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:42:28.964 [2024-11-05 16:07:50.166834] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:42:28.964 [2024-11-05 16:07:50.166841] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:42:28.964 [2024-11-05 16:07:50.166848] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:42:28.964 [2024-11-05 16:07:50.166857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:28.964 [2024-11-05 16:07:50.166864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:42:28.964 [2024-11-05 16:07:50.166872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.632 ms 00:42:28.964 [2024-11-05 16:07:50.166881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:28.964 [2024-11-05 16:07:50.192784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:28.964 [2024-11-05 16:07:50.192817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:42:28.964 [2024-11-05 16:07:50.192828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.852 ms 00:42:28.964 [2024-11-05 16:07:50.192836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:28.964 [2024-11-05 16:07:50.192919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:28.964 [2024-11-05 16:07:50.192927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:42:28.964 [2024-11-05 16:07:50.192935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:42:28.964 [2024-11-05 16:07:50.192942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:28.965 [2024-11-05 16:07:50.237123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:28.965 [2024-11-05 16:07:50.237165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:42:28.965 [2024-11-05 16:07:50.237177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.130 ms 00:42:28.965 [2024-11-05 16:07:50.237185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:28.965 [2024-11-05 16:07:50.237225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:28.965 [2024-11-05 16:07:50.237235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:42:28.965 [2024-11-05 16:07:50.237244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:42:28.965 [2024-11-05 16:07:50.237254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:28.965 [2024-11-05 16:07:50.237656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:28.965 [2024-11-05 16:07:50.237687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:42:28.965 [2024-11-05 16:07:50.237698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.335 ms 00:42:28.965 [2024-11-05 16:07:50.237705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:28.965 [2024-11-05 16:07:50.237858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:28.965 [2024-11-05 16:07:50.237868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:42:28.965 [2024-11-05 16:07:50.237877] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:42:28.965 [2024-11-05 16:07:50.237889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:28.965 [2024-11-05 16:07:50.251530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:28.965 [2024-11-05 16:07:50.251563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:42:28.965 [2024-11-05 16:07:50.251577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.620 ms 00:42:28.965 [2024-11-05 16:07:50.251585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:28.965 [2024-11-05 16:07:50.265069] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:42:28.965 [2024-11-05 16:07:50.265108] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:42:28.965 [2024-11-05 16:07:50.265120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:28.965 [2024-11-05 16:07:50.265128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:42:28.965 [2024-11-05 16:07:50.265137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.447 ms 00:42:28.965 [2024-11-05 16:07:50.265144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:28.965 [2024-11-05 16:07:50.289814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:28.965 [2024-11-05 16:07:50.289857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:42:28.965 [2024-11-05 16:07:50.289868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.627 ms 00:42:28.965 [2024-11-05 16:07:50.289876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:28.965 [2024-11-05 16:07:50.302190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:28.965 [2024-11-05 16:07:50.302226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:42:28.965 [2024-11-05 16:07:50.302236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.271 ms 00:42:28.965 [2024-11-05 16:07:50.302244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:28.965 [2024-11-05 16:07:50.314265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:28.965 [2024-11-05 16:07:50.314320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:42:28.965 [2024-11-05 16:07:50.314332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.967 ms 00:42:28.965 [2024-11-05 16:07:50.314339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:28.965 [2024-11-05 16:07:50.315098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:28.965 [2024-11-05 16:07:50.315126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:42:28.965 [2024-11-05 16:07:50.315136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.666 ms 00:42:28.965 [2024-11-05 16:07:50.315146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:29.227 [2024-11-05 16:07:50.376361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:29.227 [2024-11-05 16:07:50.376425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:42:29.227 [2024-11-05 16:07:50.376446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 61.195 ms 00:42:29.227 [2024-11-05 16:07:50.376455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:29.227 [2024-11-05 16:07:50.387720] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:42:29.227 [2024-11-05 16:07:50.391156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:29.227 [2024-11-05 16:07:50.391203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:42:29.227 [2024-11-05 16:07:50.391216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.642 ms 00:42:29.227 [2024-11-05 16:07:50.391225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:29.227 [2024-11-05 16:07:50.391343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:29.227 [2024-11-05 16:07:50.391356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:42:29.227 [2024-11-05 16:07:50.391366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:42:29.227 [2024-11-05 16:07:50.391378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:29.227 [2024-11-05 16:07:50.391452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:29.227 [2024-11-05 16:07:50.391464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:42:29.227 [2024-11-05 16:07:50.391473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:42:29.227 [2024-11-05 16:07:50.391481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:29.227 [2024-11-05 16:07:50.391502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:29.227 [2024-11-05 16:07:50.391511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:42:29.227 [2024-11-05 16:07:50.391520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:42:29.227 [2024-11-05 16:07:50.391527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:29.227 [2024-11-05 16:07:50.391563] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:42:29.227 [2024-11-05 16:07:50.391578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:29.227 [2024-11-05 16:07:50.391587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:42:29.227 [2024-11-05 16:07:50.391595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:42:29.227 [2024-11-05 16:07:50.391604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:29.227 [2024-11-05 16:07:50.417598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:29.227 [2024-11-05 16:07:50.417648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:42:29.227 [2024-11-05 16:07:50.417661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.975 ms 00:42:29.227 [2024-11-05 16:07:50.417676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:29.227 [2024-11-05 16:07:50.417778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:29.227 [2024-11-05 16:07:50.417790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:42:29.227 [2024-11-05 16:07:50.417801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:42:29.227 [2024-11-05 16:07:50.417808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:42:29.227 [2024-11-05 16:07:50.419053] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 276.062 ms, result 0 00:42:30.614 [2024-11-05T16:07:52.915Z] Copying: 12/1024 [MB] (12 MBps) [intermediate Copying progress entries, 38/1024 through 1008/1024 MB, condensed] [2024-11-05T16:09:04.872Z] Copying: 1024/1024 [MB] (average 13 MBps)[2024-11-05 16:09:04.645521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:43.510 [2024-11-05 16:09:04.645622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:43:43.510 [2024-11-05 16:09:04.645648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:43:43.510 [2024-11-05 16:09:04.646362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:43.510 [2024-11-05 16:09:04.646398] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:43:43.510 [2024-11-05 16:09:04.648933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:43.510 [2024-11-05 16:09:04.648977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:43:43.510 [2024-11-05 16:09:04.648994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.518 ms 00:43:43.510 [2024-11-05 16:09:04.649002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:43.510 [2024-11-05 16:09:04.649199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:43.510 [2024-11-05 16:09:04.649217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:43:43.510 [2024-11-05 16:09:04.649226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.173 ms 00:43:43.511 [2024-11-05 16:09:04.649233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:43.511 [2024-11-05 16:09:04.651902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:43.511 [2024-11-05 16:09:04.651929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:43:43.511 [2024-11-05 16:09:04.651938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.657 ms 00:43:43.511 [2024-11-05 16:09:04.651946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:43.511 [2024-11-05 16:09:04.657315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:43.511 [2024-11-05 16:09:04.657354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:43:43.511 [2024-11-05 16:09:04.657364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.349 ms 00:43:43.511 [2024-11-05 16:09:04.657371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:43.511 [2024-11-05 16:09:04.678621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*:
[FTL][ftl0] Action 00:43:43.511 [2024-11-05 16:09:04.678667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:43:43.511 [2024-11-05 16:09:04.678676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.189 ms 00:43:43.511 [2024-11-05 16:09:04.678683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:43.511 [2024-11-05 16:09:04.691299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:43.511 [2024-11-05 16:09:04.691340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:43:43.511 [2024-11-05 16:09:04.691352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.596 ms 00:43:43.511 [2024-11-05 16:09:04.691359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:43.511 [2024-11-05 16:09:04.691475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:43.511 [2024-11-05 16:09:04.691490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:43:43.511 [2024-11-05 16:09:04.691498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:43:43.511 [2024-11-05 16:09:04.691505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:43.511 [2024-11-05 16:09:04.710157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:43.511 [2024-11-05 16:09:04.710192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:43:43.511 [2024-11-05 16:09:04.710200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.640 ms 00:43:43.511 [2024-11-05 16:09:04.710206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:43.511 [2024-11-05 16:09:04.727878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:43.511 [2024-11-05 16:09:04.727916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:43:43.511 [2024-11-05 16:09:04.727924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.655 ms 00:43:43.511 [2024-11-05 16:09:04.727929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:43.511 [2024-11-05 16:09:04.745275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:43.511 [2024-11-05 16:09:04.745307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:43:43.511 [2024-11-05 16:09:04.745315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.330 ms 00:43:43.511 [2024-11-05 16:09:04.745321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:43.511 [2024-11-05 16:09:04.762634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:43.511 [2024-11-05 16:09:04.762660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:43:43.511 [2024-11-05 16:09:04.762668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.267 ms 00:43:43.511 [2024-11-05 16:09:04.762673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:43.511 [2024-11-05 16:09:04.762687] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:43:43.511 [2024-11-05 16:09:04.762698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:43:43.511 [2024-11-05 16:09:04.762710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:43:43.511 [2024-11-05 16:09:04.762716] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3 through Band 100: 0 / 261120 wr_cnt: 0 state: free [98 identical per-band entries condensed] 00:43:43.512 [2024-11-05 16:09:04.763310] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] [2024-11-05 16:09:04.763318] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 49f3c030-3db9-4227-a747-0995db4fc140
00:43:43.512 [2024-11-05 16:09:04.763325] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:43:43.512 [2024-11-05 16:09:04.763330] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:43:43.512 [2024-11-05 16:09:04.763336] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:43:43.512 [2024-11-05 16:09:04.763342] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:43:43.512 [2024-11-05 16:09:04.763347] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:43:43.512 [2024-11-05 16:09:04.763353] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:43:43.512 [2024-11-05 16:09:04.763364] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:43:43.512 [2024-11-05 16:09:04.763369] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:43:43.512 [2024-11-05 16:09:04.763374] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:43:43.512 [2024-11-05 16:09:04.763380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:43.512 [2024-11-05 16:09:04.763386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:43:43.512 [2024-11-05 16:09:04.763395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.693 ms 00:43:43.512 [2024-11-05 16:09:04.763401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:43.512 [2024-11-05 16:09:04.772829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:43.512 [2024-11-05 16:09:04.772853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:43:43.512 [2024-11-05 16:09:04.772860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.414 ms 00:43:43.512 [2024-11-05 16:09:04.772866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:43.512 [2024-11-05 16:09:04.773127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:43.512 [2024-11-05 16:09:04.773143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:43:43.512 [2024-11-05 16:09:04.773150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:43:43.512 [2024-11-05 16:09:04.773158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:43.512 [2024-11-05 16:09:04.798985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:43.512 [2024-11-05 16:09:04.799012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:43:43.512 [2024-11-05 16:09:04.799019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:43.512 [2024-11-05 16:09:04.799026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:43.512 [2024-11-05 16:09:04.799071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:43.512 [2024-11-05 16:09:04.799078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:43:43.512 [2024-11-05 16:09:04.799085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:43.512 [2024-11-05 16:09:04.799094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:43.512 [2024-11-05 16:09:04.799134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:43.512 [2024-11-05 16:09:04.799142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:43:43.512 [2024-11-05 16:09:04.799148] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:43.512 [2024-11-05 16:09:04.799154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:43.512 [2024-11-05 16:09:04.799164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:43.512 [2024-11-05 16:09:04.799171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:43:43.512 [2024-11-05 16:09:04.799177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:43.512 [2024-11-05 16:09:04.799182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:43.512 [2024-11-05 16:09:04.858611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:43.512 [2024-11-05 16:09:04.858644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:43:43.512 [2024-11-05 16:09:04.858654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:43.512 [2024-11-05 16:09:04.858660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:43.771 [2024-11-05 16:09:04.907040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:43.771 [2024-11-05 16:09:04.907076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:43:43.771 [2024-11-05 16:09:04.907085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:43.771 [2024-11-05 16:09:04.907092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:43.771 [2024-11-05 16:09:04.907150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:43.771 [2024-11-05 16:09:04.907158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:43:43.771 [2024-11-05 16:09:04.907164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:43.771 [2024-11-05 16:09:04.907169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:43.771 [2024-11-05 16:09:04.907197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:43.771 [2024-11-05 16:09:04.907203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:43:43.771 [2024-11-05 16:09:04.907210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:43.771 [2024-11-05 16:09:04.907215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:43.771 [2024-11-05 16:09:04.907288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:43.771 [2024-11-05 16:09:04.907296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:43:43.771 [2024-11-05 16:09:04.907302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:43.771 [2024-11-05 16:09:04.907308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:43.771 [2024-11-05 16:09:04.907329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:43.771 [2024-11-05 16:09:04.907336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:43:43.771 [2024-11-05 16:09:04.907342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:43.771 [2024-11-05 16:09:04.907348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:43.771 [2024-11-05 16:09:04.907375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:43.771 [2024-11-05 16:09:04.907383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev 00:43:43.771 [2024-11-05 16:09:04.907389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:43.771 [2024-11-05 16:09:04.907395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:43.771 [2024-11-05 16:09:04.907425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:43.771 [2024-11-05 16:09:04.907432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:43:43.771 [2024-11-05 16:09:04.907438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:43.771 [2024-11-05 16:09:04.907443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:43.771 [2024-11-05 16:09:04.907531] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 261.994 ms, result 0 00:43:44.339 00:43:44.339 00:43:44.339 16:09:05 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:43:46.885 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:43:46.885 16:09:07 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:43:46.885 [2024-11-05 16:09:07.766844] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 00:43:46.885 [2024-11-05 16:09:07.766991] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76044 ] 00:43:46.885 [2024-11-05 16:09:07.932455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:46.885 [2024-11-05 16:09:08.050379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:47.146 [2024-11-05 16:09:08.343307] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:43:47.146 [2024-11-05 16:09:08.343394] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:43:47.146 [2024-11-05 16:09:08.504463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:47.146 [2024-11-05 16:09:08.504532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:43:47.146 [2024-11-05 16:09:08.504554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:43:47.146 [2024-11-05 16:09:08.504563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:47.146 [2024-11-05 16:09:08.504623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:47.146 [2024-11-05 16:09:08.504634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:43:47.146 [2024-11-05 16:09:08.504646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:43:47.146 [2024-11-05 16:09:08.504654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:47.146 [2024-11-05 16:09:08.504676] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:43:47.146 [2024-11-05 16:09:08.505421] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:43:47.146 [2024-11-05 16:09:08.505449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:47.146 
[2024-11-05 16:09:08.505457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:43:47.146 [2024-11-05 16:09:08.505467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.778 ms 00:43:47.146 [2024-11-05 16:09:08.505475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:47.146 [2024-11-05 16:09:08.507270] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:43:47.408 [2024-11-05 16:09:08.521839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:47.408 [2024-11-05 16:09:08.521895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:43:47.408 [2024-11-05 16:09:08.521910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.571 ms 00:43:47.408 [2024-11-05 16:09:08.521919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:47.408 [2024-11-05 16:09:08.522005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:47.408 [2024-11-05 16:09:08.522016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:43:47.408 [2024-11-05 16:09:08.522024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:43:47.408 [2024-11-05 16:09:08.522032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:47.408 [2024-11-05 16:09:08.530491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:47.408 [2024-11-05 16:09:08.530536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:43:47.408 [2024-11-05 16:09:08.530547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.377 ms 00:43:47.408 [2024-11-05 16:09:08.530554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:47.408 [2024-11-05 16:09:08.530642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:47.408 [2024-11-05 16:09:08.530652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:43:47.408 [2024-11-05 16:09:08.530661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:43:47.408 [2024-11-05 16:09:08.530669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:47.408 [2024-11-05 16:09:08.530715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:47.408 [2024-11-05 16:09:08.530725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:43:47.408 [2024-11-05 16:09:08.530761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:43:47.408 [2024-11-05 16:09:08.530769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:47.408 [2024-11-05 16:09:08.530794] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:43:47.408 [2024-11-05 16:09:08.534758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:47.408 [2024-11-05 16:09:08.534814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:43:47.408 [2024-11-05 16:09:08.534826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.970 ms 00:43:47.408 [2024-11-05 16:09:08.534838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:47.408 [2024-11-05 16:09:08.534875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:47.408 [2024-11-05 16:09:08.534885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:43:47.408 
[2024-11-05 16:09:08.534895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:43:47.408 [2024-11-05 16:09:08.534902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:47.408 [2024-11-05 16:09:08.534957] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:43:47.408 [2024-11-05 16:09:08.534980] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:43:47.408 [2024-11-05 16:09:08.535018] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:43:47.408 [2024-11-05 16:09:08.535037] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:43:47.408 [2024-11-05 16:09:08.535144] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:43:47.408 [2024-11-05 16:09:08.535156] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:43:47.408 [2024-11-05 16:09:08.535167] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:43:47.409 [2024-11-05 16:09:08.535177] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:43:47.409 [2024-11-05 16:09:08.535186] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:43:47.409 [2024-11-05 16:09:08.535194] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:43:47.409 [2024-11-05 16:09:08.535202] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:43:47.409 [2024-11-05 16:09:08.535211] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:43:47.409 [2024-11-05 16:09:08.535219] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:43:47.409 [2024-11-05 16:09:08.535231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:47.409 [2024-11-05 16:09:08.535239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:43:47.409 [2024-11-05 16:09:08.535247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:43:47.409 [2024-11-05 16:09:08.535254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:47.409 [2024-11-05 16:09:08.535337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:47.409 [2024-11-05 16:09:08.535351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:43:47.409 [2024-11-05 16:09:08.535359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:43:47.409 [2024-11-05 16:09:08.535367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:47.409 [2024-11-05 16:09:08.535471] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:43:47.409 [2024-11-05 16:09:08.535489] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:43:47.409 [2024-11-05 16:09:08.535498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:43:47.409 [2024-11-05 16:09:08.535506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:47.409 [2024-11-05 16:09:08.535518] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:43:47.409 [2024-11-05 16:09:08.535525] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:43:47.409 [2024-11-05 16:09:08.535533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:43:47.409 [2024-11-05 16:09:08.535540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:43:47.409 [2024-11-05 16:09:08.535548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:43:47.409 [2024-11-05 16:09:08.535555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:43:47.409 [2024-11-05 16:09:08.535561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:43:47.409 [2024-11-05 16:09:08.535568] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:43:47.409 [2024-11-05 16:09:08.535577] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:43:47.409 [2024-11-05 16:09:08.535584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:43:47.409 [2024-11-05 16:09:08.535592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:43:47.409 [2024-11-05 16:09:08.535606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:47.409 [2024-11-05 16:09:08.535613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:43:47.409 [2024-11-05 16:09:08.535619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:43:47.409 [2024-11-05 16:09:08.535626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:47.409 [2024-11-05 16:09:08.535633] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:43:47.409 [2024-11-05 16:09:08.535640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:43:47.409 [2024-11-05 16:09:08.535647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:43:47.409 [2024-11-05 16:09:08.535653] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:43:47.409 [2024-11-05 16:09:08.535660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:43:47.409 [2024-11-05 16:09:08.535667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:43:47.409 [2024-11-05 16:09:08.535674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:43:47.409 [2024-11-05 16:09:08.535680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:43:47.409 [2024-11-05 16:09:08.535687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:43:47.409 [2024-11-05 16:09:08.535693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:43:47.409 [2024-11-05 16:09:08.535700] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:43:47.409 [2024-11-05 16:09:08.535706] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:43:47.409 [2024-11-05 16:09:08.535713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:43:47.409 [2024-11-05 16:09:08.535720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:43:47.409 [2024-11-05 16:09:08.535726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:43:47.409 [2024-11-05 16:09:08.535761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:43:47.409 [2024-11-05 16:09:08.535769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:43:47.409 [2024-11-05 16:09:08.535776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:43:47.409 [2024-11-05 
16:09:08.535784] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:43:47.409 [2024-11-05 16:09:08.535791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:43:47.409 [2024-11-05 16:09:08.535797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:47.409 [2024-11-05 16:09:08.535805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:43:47.409 [2024-11-05 16:09:08.535812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:43:47.409 [2024-11-05 16:09:08.535819] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:47.409 [2024-11-05 16:09:08.535826] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:43:47.409 [2024-11-05 16:09:08.535838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:43:47.409 [2024-11-05 16:09:08.535846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:43:47.409 [2024-11-05 16:09:08.535854] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:47.409 [2024-11-05 16:09:08.535862] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:43:47.409 [2024-11-05 16:09:08.535870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:43:47.409 [2024-11-05 16:09:08.535877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:43:47.409 [2024-11-05 16:09:08.535885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:43:47.409 [2024-11-05 16:09:08.535911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:43:47.409 [2024-11-05 16:09:08.535918] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:43:47.409 [2024-11-05 16:09:08.535927] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:43:47.409 [2024-11-05 16:09:08.535938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:43:47.409 [2024-11-05 16:09:08.535947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:43:47.409 [2024-11-05 16:09:08.535955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:43:47.409 [2024-11-05 16:09:08.535962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:43:47.409 [2024-11-05 16:09:08.535969] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:43:47.409 [2024-11-05 16:09:08.535976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:43:47.409 [2024-11-05 16:09:08.535984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:43:47.409 [2024-11-05 16:09:08.535992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:43:47.409 [2024-11-05 16:09:08.536000] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:43:47.409 [2024-11-05 16:09:08.536008] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:43:47.409 [2024-11-05 16:09:08.536016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:43:47.409 [2024-11-05 16:09:08.536023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:43:47.409 [2024-11-05 16:09:08.536031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:43:47.409 [2024-11-05 16:09:08.536040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:43:47.409 [2024-11-05 16:09:08.536048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:43:47.409 [2024-11-05 16:09:08.536055] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:43:47.409 [2024-11-05 16:09:08.536067] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:43:47.409 [2024-11-05 16:09:08.536075] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:43:47.409 [2024-11-05 16:09:08.536083] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:43:47.409 [2024-11-05 16:09:08.536091] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:43:47.409 [2024-11-05 16:09:08.536105] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:43:47.409 [2024-11-05 16:09:08.536113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:47.409 [2024-11-05 16:09:08.536121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:43:47.409 [2024-11-05 16:09:08.536129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.711 ms 00:43:47.409 [2024-11-05 16:09:08.536136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:47.409 [2024-11-05 16:09:08.568507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:47.409 [2024-11-05 16:09:08.568562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:43:47.409 [2024-11-05 16:09:08.568574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.322 ms 00:43:47.409 [2024-11-05 16:09:08.568583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:47.409 [2024-11-05 16:09:08.568682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:47.409 [2024-11-05 16:09:08.568691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:43:47.409 [2024-11-05 16:09:08.568700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:43:47.410 [2024-11-05 16:09:08.568709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:47.410 [2024-11-05 16:09:08.620479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:47.410 [2024-11-05 16:09:08.620536] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:43:47.410 [2024-11-05 16:09:08.620549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.681 ms 00:43:47.410 [2024-11-05 16:09:08.620558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:47.410 [2024-11-05 16:09:08.620608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:47.410 [2024-11-05 16:09:08.620618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:43:47.410 [2024-11-05 16:09:08.620628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:43:47.410 [2024-11-05 16:09:08.620640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:47.410 [2024-11-05 16:09:08.621276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:47.410 [2024-11-05 16:09:08.621313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:43:47.410 [2024-11-05 16:09:08.621325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.556 ms 00:43:47.410 [2024-11-05 16:09:08.621334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:47.410 [2024-11-05 16:09:08.621498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:47.410 [2024-11-05 16:09:08.621585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:43:47.410 [2024-11-05 16:09:08.621597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:43:47.410 [2024-11-05 16:09:08.621612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:47.410 [2024-11-05 16:09:08.637419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:47.410 [2024-11-05 16:09:08.637484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:43:47.410 [2024-11-05 16:09:08.637500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.783 ms 00:43:47.410 [2024-11-05 16:09:08.637508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:47.410 [2024-11-05 16:09:08.652157] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:43:47.410 [2024-11-05 16:09:08.652377] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:43:47.410 [2024-11-05 16:09:08.652398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:47.410 [2024-11-05 16:09:08.652407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:43:47.410 [2024-11-05 16:09:08.652418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.779 ms 00:43:47.410 [2024-11-05 16:09:08.652425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:47.410 [2024-11-05 16:09:08.678529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:47.410 [2024-11-05 16:09:08.678592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:43:47.410 [2024-11-05 16:09:08.678604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.056 ms 00:43:47.410 [2024-11-05 16:09:08.678612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:47.410 [2024-11-05 16:09:08.691792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:47.410 [2024-11-05 16:09:08.691840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band 
info metadata 00:43:47.410 [2024-11-05 16:09:08.691853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.121 ms 00:43:47.410 [2024-11-05 16:09:08.691861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:47.410 [2024-11-05 16:09:08.704680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:47.410 [2024-11-05 16:09:08.704728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:43:47.410 [2024-11-05 16:09:08.704755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.768 ms 00:43:47.410 [2024-11-05 16:09:08.704764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:47.410 [2024-11-05 16:09:08.705418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:47.410 [2024-11-05 16:09:08.705453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:43:47.410 [2024-11-05 16:09:08.705464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.539 ms 00:43:47.410 [2024-11-05 16:09:08.705475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:47.671 [2024-11-05 16:09:08.771834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:47.671 [2024-11-05 16:09:08.771904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:43:47.671 [2024-11-05 16:09:08.771928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.339 ms 00:43:47.671 [2024-11-05 16:09:08.771937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:47.671 [2024-11-05 16:09:08.783750] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:43:47.671 [2024-11-05 16:09:08.787448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:47.671 [2024-11-05 16:09:08.787497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:43:47.671 [2024-11-05 16:09:08.787510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.447 ms 00:43:47.671 [2024-11-05 16:09:08.787518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:47.671 [2024-11-05 16:09:08.787615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:47.671 [2024-11-05 16:09:08.787628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:43:47.671 [2024-11-05 16:09:08.787638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:43:47.671 [2024-11-05 16:09:08.787650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:47.671 [2024-11-05 16:09:08.787722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:47.671 [2024-11-05 16:09:08.787760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:43:47.671 [2024-11-05 16:09:08.787771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:43:47.671 [2024-11-05 16:09:08.787779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:47.671 [2024-11-05 16:09:08.787800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:47.671 [2024-11-05 16:09:08.787809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:43:47.671 [2024-11-05 16:09:08.787819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:43:47.671 [2024-11-05 16:09:08.787827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:47.671 
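A reading aid for the trace above and below: almost every line is part of an Action / name / duration / status quadruple printed by trace_step() in mngt/ftl_mngt.c. FTL startup and shutdown each run as an ordered pipeline of named management steps; each step is timed and its return status logged, and the Rollback quadruples elsewhere in this log are the unwind counterparts of earlier Initialize steps (run during shutdown, or when a startup pipeline fails partway). A minimal shell sketch of that logging shape, assuming made-up step names and a hypothetical run_step helper (this is not SPDK source):

#!/usr/bin/env bash
# Sketch of the trace_step() logging pattern only -- not SPDK code.
# run_step and the step names below are illustrative.
run_step() {
    local name=$1; shift
    local t0 t1 us rc
    t0=$(date +%s%N)            # wall clock before the step, in ns
    "$@"; rc=$?                 # execute the step itself
    t1=$(date +%s%N)
    us=$(( (t1 - t0) / 1000 ))
    echo "Action"
    echo "name: $name"
    printf 'duration: %.3f ms\n' "${us}e-3"
    echo "status: $rc"
    return "$rc"
}

# Steps run in a fixed order, like the startup pipeline in this log;
# a non-zero status stops the chain (the real code then runs Rollback steps).
run_step "Load super block" sleep 0.01 &&
run_step "Restore L2P" sleep 0.02 &&
run_step "Start core poller" true

One figure worth cross-checking against the layout dump above: 20971520 L2P entries at the logged address size of 4 bytes is 83886080 bytes, exactly the 80.00 MiB reported for the l2p region.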
[2024-11-05 16:09:08.787863] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:43:47.671 [2024-11-05 16:09:08.787876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:43:47.671 [2024-11-05 16:09:08.787886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:43:47.671 [2024-11-05 16:09:08.787895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms
00:43:47.671 [2024-11-05 16:09:08.787903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:43:47.671 [2024-11-05 16:09:08.814100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:43:47.671 [2024-11-05 16:09:08.814315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:43:47.671 [2024-11-05 16:09:08.814338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.177 ms
00:43:47.671 [2024-11-05 16:09:08.814356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:43:47.671 [2024-11-05 16:09:08.814594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:43:47.671 [2024-11-05 16:09:08.814624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:43:47.671 [2024-11-05 16:09:08.814635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms
00:43:47.671 [2024-11-05 16:09:08.814643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:43:47.671 [2024-11-05 16:09:08.815954] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 310.970 ms, result 0
00:43:48.615 [2024-11-05T16:09:10.917Z .. 2024-11-05T16:10:15.638Z] Copying: 20/1024 -> 1024/1024 [MB] (~60 redrawn progress-meter updates condensed; per-update rates ranged roughly 10-35 MBps, with a few intervals reported in kB; average 15 MBps)
[2024-11-05 16:10:15.328356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:44:54.276 [2024-11-05 16:10:15.328429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:44:54.276 [2024-11-05 16:10:15.328446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:44:54.276 [2024-11-05 16:10:15.328472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:44:54.276 [2024-11-05 16:10:15.330682] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:44:54.276 [2024-11-05 16:10:15.334889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:44:54.276 [2024-11-05 16:10:15.334940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:44:54.276 [2024-11-05 16:10:15.334953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.983 ms
00:44:54.276 [2024-11-05 16:10:15.334965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:44:54.276 [2024-11-05 16:10:15.346188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:44:54.276 [2024-11-05 16:10:15.346397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:44:54.276 [2024-11-05 16:10:15.346421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*:
[FTL][ftl0] duration: 8.415 ms 00:44:54.276 [2024-11-05 16:10:15.346430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:54.276 [2024-11-05 16:10:15.371213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:54.276 [2024-11-05 16:10:15.371270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:44:54.276 [2024-11-05 16:10:15.371284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.748 ms 00:44:54.276 [2024-11-05 16:10:15.371293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:54.276 [2024-11-05 16:10:15.377452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:54.276 [2024-11-05 16:10:15.377645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:44:54.276 [2024-11-05 16:10:15.377668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.119 ms 00:44:54.276 [2024-11-05 16:10:15.377678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:54.276 [2024-11-05 16:10:15.404709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:54.276 [2024-11-05 16:10:15.404768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:44:54.276 [2024-11-05 16:10:15.404782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.950 ms 00:44:54.276 [2024-11-05 16:10:15.404790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:54.276 [2024-11-05 16:10:15.421710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:54.276 [2024-11-05 16:10:15.421775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:44:54.276 [2024-11-05 16:10:15.421790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.868 ms 00:44:54.276 [2024-11-05 16:10:15.421798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:54.276 [2024-11-05 16:10:15.567119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:54.276 [2024-11-05 16:10:15.567187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:44:54.276 [2024-11-05 16:10:15.567201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 145.263 ms 00:44:54.276 [2024-11-05 16:10:15.567209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:54.276 [2024-11-05 16:10:15.593482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:54.276 [2024-11-05 16:10:15.593530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:44:54.276 [2024-11-05 16:10:15.593542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.256 ms 00:44:54.276 [2024-11-05 16:10:15.593549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:54.276 [2024-11-05 16:10:15.620284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:54.276 [2024-11-05 16:10:15.620346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:44:54.276 [2024-11-05 16:10:15.620358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.683 ms 00:44:54.276 [2024-11-05 16:10:15.620366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:54.539 [2024-11-05 16:10:15.646098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:54.539 [2024-11-05 16:10:15.646148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:44:54.539 
[2024-11-05 16:10:15.646160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.682 ms 00:44:54.539 [2024-11-05 16:10:15.646167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:54.539 [2024-11-05 16:10:15.671581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:54.539 [2024-11-05 16:10:15.671797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:44:54.539 [2024-11-05 16:10:15.671820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.281 ms 00:44:54.539 [2024-11-05 16:10:15.671829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:54.539 [2024-11-05 16:10:15.671895] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:44:54.539 [2024-11-05 16:10:15.671912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 101376 / 261120 wr_cnt: 1 state: open 00:44:54.539 [2024-11-05 16:10:15.671923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.671932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.671941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.671950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.671959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.671967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.671975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.671983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.671991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.671999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.672006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.672014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.672022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.672029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.672036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.672043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.672052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.672060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.672067] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.672075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.672083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.672090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.672097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.672104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.672112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.672120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.672131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.672139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.672147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.672156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.672164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.672172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.672180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.672188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.672195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.672203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.672211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.672219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.672226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.672234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.672241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.672248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.672256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 
16:10:15.672263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.672270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.672278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.672286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.672293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.672303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.672311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.672318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.672326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.672333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.672341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.672349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.672357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.672365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.672373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.672380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.672388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.672395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.672404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.672411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.672419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.672427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.672435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.672443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.672450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 
00:44:54.539 [2024-11-05 16:10:15.672457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.672465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.672473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.672481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.672490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.672498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.672505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:44:54.539 [2024-11-05 16:10:15.672512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:44:54.540 [2024-11-05 16:10:15.672520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:44:54.540 [2024-11-05 16:10:15.672528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:44:54.540 [2024-11-05 16:10:15.672535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:44:54.540 [2024-11-05 16:10:15.672542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:44:54.540 [2024-11-05 16:10:15.672554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:44:54.540 [2024-11-05 16:10:15.672562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:44:54.540 [2024-11-05 16:10:15.672570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:44:54.540 [2024-11-05 16:10:15.672578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:44:54.540 [2024-11-05 16:10:15.672586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:44:54.540 [2024-11-05 16:10:15.672594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:44:54.540 [2024-11-05 16:10:15.672601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:44:54.540 [2024-11-05 16:10:15.672608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:44:54.540 [2024-11-05 16:10:15.672616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:44:54.540 [2024-11-05 16:10:15.672624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:44:54.540 [2024-11-05 16:10:15.672631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:44:54.540 [2024-11-05 16:10:15.672638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:44:54.540 [2024-11-05 16:10:15.672646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 
wr_cnt: 0 state: free 00:44:54.540 [2024-11-05 16:10:15.672655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:44:54.540 [2024-11-05 16:10:15.672662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:44:54.540 [2024-11-05 16:10:15.672670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:44:54.540 [2024-11-05 16:10:15.672678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:44:54.540 [2024-11-05 16:10:15.672686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:44:54.540 [2024-11-05 16:10:15.672694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:44:54.540 [2024-11-05 16:10:15.672710] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:44:54.540 [2024-11-05 16:10:15.672718] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 49f3c030-3db9-4227-a747-0995db4fc140 00:44:54.540 [2024-11-05 16:10:15.672727] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 101376 00:44:54.540 [2024-11-05 16:10:15.672749] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 102336 00:44:54.540 [2024-11-05 16:10:15.672757] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 101376 00:44:54.540 [2024-11-05 16:10:15.672765] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0095 00:44:54.540 [2024-11-05 16:10:15.672773] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:44:54.540 [2024-11-05 16:10:15.672788] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:44:54.540 [2024-11-05 16:10:15.672803] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:44:54.540 [2024-11-05 16:10:15.672810] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:44:54.540 [2024-11-05 16:10:15.672816] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:44:54.540 [2024-11-05 16:10:15.672824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:54.540 [2024-11-05 16:10:15.672833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:44:54.540 [2024-11-05 16:10:15.672842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.930 ms 00:44:54.540 [2024-11-05 16:10:15.672851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:54.540 [2024-11-05 16:10:15.687002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:54.540 [2024-11-05 16:10:15.687174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:44:54.540 [2024-11-05 16:10:15.687231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.116 ms 00:44:54.540 [2024-11-05 16:10:15.687265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:54.540 [2024-11-05 16:10:15.687668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:54.540 [2024-11-05 16:10:15.687713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:44:54.540 [2024-11-05 16:10:15.687923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.366 ms 00:44:54.540 [2024-11-05 16:10:15.687949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:54.540 [2024-11-05 
16:10:15.724529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:54.540 [2024-11-05 16:10:15.724710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:44:54.540 [2024-11-05 16:10:15.724810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:54.540 [2024-11-05 16:10:15.724837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:54.540 [2024-11-05 16:10:15.724920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:54.540 [2024-11-05 16:10:15.724942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:44:54.540 [2024-11-05 16:10:15.724962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:54.540 [2024-11-05 16:10:15.724981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:54.540 [2024-11-05 16:10:15.725065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:54.540 [2024-11-05 16:10:15.725091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:44:54.540 [2024-11-05 16:10:15.725113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:54.540 [2024-11-05 16:10:15.725221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:54.540 [2024-11-05 16:10:15.725257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:54.540 [2024-11-05 16:10:15.725279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:44:54.540 [2024-11-05 16:10:15.725300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:54.540 [2024-11-05 16:10:15.725318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:54.540 [2024-11-05 16:10:15.812056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:54.540 [2024-11-05 16:10:15.812256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:44:54.540 [2024-11-05 16:10:15.812330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:54.540 [2024-11-05 16:10:15.812354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:54.540 [2024-11-05 16:10:15.881402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:54.540 [2024-11-05 16:10:15.881597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:44:54.540 [2024-11-05 16:10:15.881659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:54.540 [2024-11-05 16:10:15.881684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:54.540 [2024-11-05 16:10:15.881814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:54.540 [2024-11-05 16:10:15.881899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:44:54.540 [2024-11-05 16:10:15.881925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:54.540 [2024-11-05 16:10:15.881945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:54.540 [2024-11-05 16:10:15.882044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:54.540 [2024-11-05 16:10:15.882070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:44:54.540 [2024-11-05 16:10:15.882091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:54.540 [2024-11-05 16:10:15.882111] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:54.540 [2024-11-05 16:10:15.882239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:54.540 [2024-11-05 16:10:15.882283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:44:54.540 [2024-11-05 16:10:15.882305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:54.540 [2024-11-05 16:10:15.882324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:54.540 [2024-11-05 16:10:15.882494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:54.540 [2024-11-05 16:10:15.882523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:44:54.540 [2024-11-05 16:10:15.882544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:54.540 [2024-11-05 16:10:15.882563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:54.540 [2024-11-05 16:10:15.882618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:54.540 [2024-11-05 16:10:15.882641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:44:54.540 [2024-11-05 16:10:15.882661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:54.540 [2024-11-05 16:10:15.882681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:54.540 [2024-11-05 16:10:15.882763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:54.540 [2024-11-05 16:10:15.882880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:44:54.540 [2024-11-05 16:10:15.882901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:54.540 [2024-11-05 16:10:15.882921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:54.540 [2024-11-05 16:10:15.883075] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 554.680 ms, result 0 00:44:55.924 00:44:55.924 00:44:55.924 16:10:17 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:44:56.186 [2024-11-05 16:10:17.300397] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 
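[Note: the statistics dumped at the end of the FTL shutdown above report total writes 102336 against user writes 101376, and the WAF field is consistent with the ratio of those two counters:

    WAF = total_writes / user_writes = 102336 / 101376 ≈ 1.0095

The same relation holds for the second dump later in this run (30656 / 29696 ≈ 1.0323). The restore pass that follows re-reads a range of the test data back out of the FTL bdev with spdk_dd; a minimal sketch of that invocation, using only the flags visible in the command line above, with dd-style semantics assumed for the comments (--ib names the input bdev, --of the output file, --skip/--count the I/O-unit range, --json the app config that recreates ftl0):

    # Read 262144 I/O units from bdev ftl0, skipping the first 131072,
    # into a regular file for comparison against the written test data;
    # ftl.json carries the bdev configuration that recreates ftl0.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --ib=ftl0 \
        --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json \
        --skip=131072 \
        --count=262144

The startup log that follows is this spdk_dd process bringing the FTL device back up before copying.]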
00:44:56.186 [2024-11-05 16:10:17.300766] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76759 ] 00:44:56.186 [2024-11-05 16:10:17.464661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:56.447 [2024-11-05 16:10:17.585719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:56.708 [2024-11-05 16:10:17.881254] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:44:56.708 [2024-11-05 16:10:17.881527] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:44:56.708 [2024-11-05 16:10:18.042555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.708 [2024-11-05 16:10:18.042826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:44:56.708 [2024-11-05 16:10:18.042934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:44:56.708 [2024-11-05 16:10:18.042963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.708 [2024-11-05 16:10:18.043055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.708 [2024-11-05 16:10:18.043083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:44:56.708 [2024-11-05 16:10:18.043108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:44:56.708 [2024-11-05 16:10:18.043204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.708 [2024-11-05 16:10:18.043241] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:44:56.708 [2024-11-05 16:10:18.043964] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:44:56.708 [2024-11-05 16:10:18.043995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.708 [2024-11-05 16:10:18.044005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:44:56.708 [2024-11-05 16:10:18.044015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.762 ms 00:44:56.708 [2024-11-05 16:10:18.044028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.708 [2024-11-05 16:10:18.045796] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:44:56.708 [2024-11-05 16:10:18.060132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.708 [2024-11-05 16:10:18.060185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:44:56.708 [2024-11-05 16:10:18.060199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.339 ms 00:44:56.708 [2024-11-05 16:10:18.060207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.708 [2024-11-05 16:10:18.060295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.708 [2024-11-05 16:10:18.060306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:44:56.708 [2024-11-05 16:10:18.060316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:44:56.708 [2024-11-05 16:10:18.060323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.709 [2024-11-05 16:10:18.068986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:44:56.709 [2024-11-05 16:10:18.069032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:44:56.709 [2024-11-05 16:10:18.069044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.580 ms 00:44:56.709 [2024-11-05 16:10:18.069053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.709 [2024-11-05 16:10:18.069143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.709 [2024-11-05 16:10:18.069152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:44:56.709 [2024-11-05 16:10:18.069160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:44:56.709 [2024-11-05 16:10:18.069168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.709 [2024-11-05 16:10:18.069215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.709 [2024-11-05 16:10:18.069226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:44:56.709 [2024-11-05 16:10:18.069235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:44:56.709 [2024-11-05 16:10:18.069243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.709 [2024-11-05 16:10:18.069266] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:44:56.972 [2024-11-05 16:10:18.073349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.972 [2024-11-05 16:10:18.073393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:44:56.972 [2024-11-05 16:10:18.073404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.088 ms 00:44:56.972 [2024-11-05 16:10:18.073416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.972 [2024-11-05 16:10:18.073454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.972 [2024-11-05 16:10:18.073463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:44:56.972 [2024-11-05 16:10:18.073472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:44:56.972 [2024-11-05 16:10:18.073480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.972 [2024-11-05 16:10:18.073535] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:44:56.972 [2024-11-05 16:10:18.073559] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:44:56.972 [2024-11-05 16:10:18.073597] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:44:56.972 [2024-11-05 16:10:18.073616] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:44:56.972 [2024-11-05 16:10:18.073721] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:44:56.972 [2024-11-05 16:10:18.073758] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:44:56.972 [2024-11-05 16:10:18.073770] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:44:56.972 [2024-11-05 16:10:18.073781] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:44:56.972 [2024-11-05 16:10:18.073790] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:44:56.972 [2024-11-05 16:10:18.073799] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:44:56.972 [2024-11-05 16:10:18.073807] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:44:56.972 [2024-11-05 16:10:18.073815] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:44:56.972 [2024-11-05 16:10:18.073824] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:44:56.972 [2024-11-05 16:10:18.073836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.972 [2024-11-05 16:10:18.073844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:44:56.972 [2024-11-05 16:10:18.073852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:44:56.972 [2024-11-05 16:10:18.073859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.972 [2024-11-05 16:10:18.073942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.972 [2024-11-05 16:10:18.073951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:44:56.972 [2024-11-05 16:10:18.073960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:44:56.972 [2024-11-05 16:10:18.073967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.972 [2024-11-05 16:10:18.074072] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:44:56.972 [2024-11-05 16:10:18.074086] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:44:56.972 [2024-11-05 16:10:18.074095] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:44:56.972 [2024-11-05 16:10:18.074103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:56.972 [2024-11-05 16:10:18.074112] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:44:56.972 [2024-11-05 16:10:18.074119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:44:56.972 [2024-11-05 16:10:18.074125] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:44:56.972 [2024-11-05 16:10:18.074133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:44:56.972 [2024-11-05 16:10:18.074139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:44:56.972 [2024-11-05 16:10:18.074146] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:44:56.972 [2024-11-05 16:10:18.074153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:44:56.972 [2024-11-05 16:10:18.074161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:44:56.972 [2024-11-05 16:10:18.074169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:44:56.972 [2024-11-05 16:10:18.074176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:44:56.972 [2024-11-05 16:10:18.074183] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:44:56.972 [2024-11-05 16:10:18.074197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:56.972 [2024-11-05 16:10:18.074204] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:44:56.972 [2024-11-05 16:10:18.074211] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:44:56.972 [2024-11-05 16:10:18.074218] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:56.972 [2024-11-05 16:10:18.074225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:44:56.972 [2024-11-05 16:10:18.074232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:44:56.972 [2024-11-05 16:10:18.074239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:56.972 [2024-11-05 16:10:18.074246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:44:56.972 [2024-11-05 16:10:18.074253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:44:56.972 [2024-11-05 16:10:18.074277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:56.972 [2024-11-05 16:10:18.074285] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:44:56.972 [2024-11-05 16:10:18.074292] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:44:56.972 [2024-11-05 16:10:18.074299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:56.972 [2024-11-05 16:10:18.074306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:44:56.972 [2024-11-05 16:10:18.074313] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:44:56.972 [2024-11-05 16:10:18.074320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:56.972 [2024-11-05 16:10:18.074327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:44:56.972 [2024-11-05 16:10:18.074333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:44:56.972 [2024-11-05 16:10:18.074340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:44:56.972 [2024-11-05 16:10:18.074346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:44:56.972 [2024-11-05 16:10:18.074353] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:44:56.972 [2024-11-05 16:10:18.074360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:44:56.972 [2024-11-05 16:10:18.074367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:44:56.972 [2024-11-05 16:10:18.074376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:44:56.972 [2024-11-05 16:10:18.074383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:56.972 [2024-11-05 16:10:18.074389] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:44:56.972 [2024-11-05 16:10:18.074397] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:44:56.973 [2024-11-05 16:10:18.074403] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:56.973 [2024-11-05 16:10:18.074412] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:44:56.973 [2024-11-05 16:10:18.074420] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:44:56.973 [2024-11-05 16:10:18.074429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:44:56.973 [2024-11-05 16:10:18.074437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:56.973 [2024-11-05 16:10:18.074445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:44:56.973 [2024-11-05 16:10:18.074452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:44:56.973 [2024-11-05 16:10:18.074459] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:44:56.973 
[2024-11-05 16:10:18.074466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:44:56.973 [2024-11-05 16:10:18.074473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:44:56.973 [2024-11-05 16:10:18.074481] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:44:56.973 [2024-11-05 16:10:18.074492] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:44:56.973 [2024-11-05 16:10:18.074502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:44:56.973 [2024-11-05 16:10:18.074511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:44:56.973 [2024-11-05 16:10:18.074518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:44:56.973 [2024-11-05 16:10:18.074525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:44:56.973 [2024-11-05 16:10:18.074533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:44:56.973 [2024-11-05 16:10:18.074540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:44:56.973 [2024-11-05 16:10:18.074548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:44:56.973 [2024-11-05 16:10:18.074555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:44:56.973 [2024-11-05 16:10:18.074562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:44:56.973 [2024-11-05 16:10:18.074569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:44:56.973 [2024-11-05 16:10:18.074577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:44:56.973 [2024-11-05 16:10:18.074584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:44:56.973 [2024-11-05 16:10:18.074591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:44:56.973 [2024-11-05 16:10:18.074598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:44:56.973 [2024-11-05 16:10:18.074606] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:44:56.973 [2024-11-05 16:10:18.074614] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:44:56.973 [2024-11-05 16:10:18.074626] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:44:56.973 [2024-11-05 16:10:18.074635] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:44:56.973 [2024-11-05 16:10:18.074642] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:44:56.973 [2024-11-05 16:10:18.074648] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:44:56.973 [2024-11-05 16:10:18.074655] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:44:56.973 [2024-11-05 16:10:18.074667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.973 [2024-11-05 16:10:18.074676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:44:56.973 [2024-11-05 16:10:18.074684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.664 ms 00:44:56.973 [2024-11-05 16:10:18.074691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.973 [2024-11-05 16:10:18.107104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.973 [2024-11-05 16:10:18.107323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:44:56.973 [2024-11-05 16:10:18.107345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.368 ms 00:44:56.973 [2024-11-05 16:10:18.107354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.973 [2024-11-05 16:10:18.107459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.973 [2024-11-05 16:10:18.107469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:44:56.973 [2024-11-05 16:10:18.107478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:44:56.973 [2024-11-05 16:10:18.107485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.973 [2024-11-05 16:10:18.156408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.973 [2024-11-05 16:10:18.156633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:44:56.973 [2024-11-05 16:10:18.156657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.858 ms 00:44:56.973 [2024-11-05 16:10:18.156667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.973 [2024-11-05 16:10:18.156721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.973 [2024-11-05 16:10:18.156731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:44:56.973 [2024-11-05 16:10:18.156767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:44:56.973 [2024-11-05 16:10:18.156782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.973 [2024-11-05 16:10:18.157372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.973 [2024-11-05 16:10:18.157410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:44:56.973 [2024-11-05 16:10:18.157423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.509 ms 00:44:56.973 [2024-11-05 16:10:18.157431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.973 [2024-11-05 16:10:18.157592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.973 [2024-11-05 16:10:18.157612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:44:56.973 [2024-11-05 16:10:18.157621] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:44:56.973 [2024-11-05 16:10:18.157635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.973 [2024-11-05 16:10:18.173484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.973 [2024-11-05 16:10:18.173532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:44:56.973 [2024-11-05 16:10:18.173548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.829 ms 00:44:56.973 [2024-11-05 16:10:18.173557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.973 [2024-11-05 16:10:18.188264] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:44:56.973 [2024-11-05 16:10:18.188314] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:44:56.973 [2024-11-05 16:10:18.188329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.973 [2024-11-05 16:10:18.188338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:44:56.973 [2024-11-05 16:10:18.188348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.662 ms 00:44:56.973 [2024-11-05 16:10:18.188355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.973 [2024-11-05 16:10:18.214711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.973 [2024-11-05 16:10:18.214938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:44:56.973 [2024-11-05 16:10:18.214960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.297 ms 00:44:56.973 [2024-11-05 16:10:18.214969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.973 [2024-11-05 16:10:18.228311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.973 [2024-11-05 16:10:18.228372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:44:56.973 [2024-11-05 16:10:18.228385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.290 ms 00:44:56.973 [2024-11-05 16:10:18.228393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.973 [2024-11-05 16:10:18.241129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.973 [2024-11-05 16:10:18.241179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:44:56.973 [2024-11-05 16:10:18.241192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.683 ms 00:44:56.973 [2024-11-05 16:10:18.241199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.973 [2024-11-05 16:10:18.241915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.973 [2024-11-05 16:10:18.241942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:44:56.973 [2024-11-05 16:10:18.241952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.592 ms 00:44:56.973 [2024-11-05 16:10:18.241963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.973 [2024-11-05 16:10:18.308409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.973 [2024-11-05 16:10:18.308480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:44:56.973 [2024-11-05 16:10:18.308506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 66.424 ms 00:44:56.973 [2024-11-05 16:10:18.308515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.973 [2024-11-05 16:10:18.320061] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:44:56.973 [2024-11-05 16:10:18.323846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.973 [2024-11-05 16:10:18.323890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:44:56.973 [2024-11-05 16:10:18.323904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.263 ms 00:44:56.973 [2024-11-05 16:10:18.323913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.973 [2024-11-05 16:10:18.324012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.973 [2024-11-05 16:10:18.324025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:44:56.973 [2024-11-05 16:10:18.324034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:44:56.973 [2024-11-05 16:10:18.324046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.973 [2024-11-05 16:10:18.325844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.974 [2024-11-05 16:10:18.325894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:44:56.974 [2024-11-05 16:10:18.325906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.758 ms 00:44:56.974 [2024-11-05 16:10:18.325915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.974 [2024-11-05 16:10:18.325948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.974 [2024-11-05 16:10:18.325957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:44:56.974 [2024-11-05 16:10:18.325967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:44:56.974 [2024-11-05 16:10:18.325975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:56.974 [2024-11-05 16:10:18.326020] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:44:56.974 [2024-11-05 16:10:18.326034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:56.974 [2024-11-05 16:10:18.326043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:44:56.974 [2024-11-05 16:10:18.326052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:44:56.974 [2024-11-05 16:10:18.326061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:57.235 [2024-11-05 16:10:18.352428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:57.235 [2024-11-05 16:10:18.352480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:44:57.235 [2024-11-05 16:10:18.352494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.347 ms 00:44:57.235 [2024-11-05 16:10:18.352509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:57.235 [2024-11-05 16:10:18.352603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:57.235 [2024-11-05 16:10:18.352613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:44:57.235 [2024-11-05 16:10:18.352623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:44:57.235 [2024-11-05 16:10:18.352631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:44:57.235 [2024-11-05 16:10:18.354329] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 311.253 ms, result 0 00:44:58.622  [2024-11-05T16:10:20.557Z] Copying: 8488/1048576 [kB] (8488 kBps) [2024-11-05T16:10:21.947Z] Copying: 22/1024 [MB] (14 MBps) [2024-11-05T16:10:22.890Z] Copying: 35/1024 [MB] (12 MBps) [2024-11-05T16:10:23.835Z] Copying: 49/1024 [MB] (13 MBps) [2024-11-05T16:10:24.780Z] Copying: 69/1024 [MB] (20 MBps) [2024-11-05T16:10:25.725Z] Copying: 89/1024 [MB] (19 MBps) [2024-11-05T16:10:26.669Z] Copying: 102/1024 [MB] (12 MBps) [2024-11-05T16:10:27.651Z] Copying: 112/1024 [MB] (10 MBps) [2024-11-05T16:10:28.599Z] Copying: 123/1024 [MB] (10 MBps) [2024-11-05T16:10:29.988Z] Copying: 133/1024 [MB] (10 MBps) [2024-11-05T16:10:30.562Z] Copying: 143/1024 [MB] (10 MBps) [2024-11-05T16:10:31.952Z] Copying: 154/1024 [MB] (10 MBps) [2024-11-05T16:10:32.897Z] Copying: 164/1024 [MB] (10 MBps) [2024-11-05T16:10:33.842Z] Copying: 175/1024 [MB] (10 MBps) [2024-11-05T16:10:34.786Z] Copying: 187/1024 [MB] (11 MBps) [2024-11-05T16:10:35.732Z] Copying: 198/1024 [MB] (11 MBps) [2024-11-05T16:10:36.677Z] Copying: 209/1024 [MB] (11 MBps) [2024-11-05T16:10:37.622Z] Copying: 221/1024 [MB] (11 MBps) [2024-11-05T16:10:38.565Z] Copying: 232/1024 [MB] (11 MBps) [2024-11-05T16:10:39.952Z] Copying: 244/1024 [MB] (11 MBps) [2024-11-05T16:10:40.896Z] Copying: 255/1024 [MB] (11 MBps) [2024-11-05T16:10:41.840Z] Copying: 267/1024 [MB] (11 MBps) [2024-11-05T16:10:42.786Z] Copying: 278/1024 [MB] (11 MBps) [2024-11-05T16:10:43.735Z] Copying: 289/1024 [MB] (11 MBps) [2024-11-05T16:10:44.680Z] Copying: 300/1024 [MB] (10 MBps) [2024-11-05T16:10:45.624Z] Copying: 311/1024 [MB] (10 MBps) [2024-11-05T16:10:46.568Z] Copying: 321/1024 [MB] (10 MBps) [2024-11-05T16:10:47.957Z] Copying: 341/1024 [MB] (20 MBps) [2024-11-05T16:10:48.900Z] Copying: 360/1024 [MB] (18 MBps) [2024-11-05T16:10:49.844Z] Copying: 379/1024 [MB] (18 MBps) [2024-11-05T16:10:50.788Z] Copying: 399/1024 [MB] (20 MBps) [2024-11-05T16:10:51.733Z] Copying: 417/1024 [MB] (18 MBps) [2024-11-05T16:10:52.675Z] Copying: 436/1024 [MB] (19 MBps) [2024-11-05T16:10:53.620Z] Copying: 457/1024 [MB] (20 MBps) [2024-11-05T16:10:54.565Z] Copying: 475/1024 [MB] (18 MBps) [2024-11-05T16:10:55.951Z] Copying: 493/1024 [MB] (17 MBps) [2024-11-05T16:10:56.905Z] Copying: 514/1024 [MB] (20 MBps) [2024-11-05T16:10:57.851Z] Copying: 531/1024 [MB] (16 MBps) [2024-11-05T16:10:58.794Z] Copying: 548/1024 [MB] (17 MBps) [2024-11-05T16:10:59.737Z] Copying: 564/1024 [MB] (15 MBps) [2024-11-05T16:11:00.677Z] Copying: 575/1024 [MB] (10 MBps) [2024-11-05T16:11:01.619Z] Copying: 593/1024 [MB] (18 MBps) [2024-11-05T16:11:02.564Z] Copying: 608/1024 [MB] (15 MBps) [2024-11-05T16:11:03.952Z] Copying: 619/1024 [MB] (10 MBps) [2024-11-05T16:11:04.892Z] Copying: 629/1024 [MB] (10 MBps) [2024-11-05T16:11:05.836Z] Copying: 644/1024 [MB] (14 MBps) [2024-11-05T16:11:06.777Z] Copying: 661/1024 [MB] (17 MBps) [2024-11-05T16:11:07.717Z] Copying: 680/1024 [MB] (19 MBps) [2024-11-05T16:11:08.658Z] Copying: 698/1024 [MB] (18 MBps) [2024-11-05T16:11:09.601Z] Copying: 709/1024 [MB] (10 MBps) [2024-11-05T16:11:10.575Z] Copying: 729/1024 [MB] (20 MBps) [2024-11-05T16:11:11.550Z] Copying: 743/1024 [MB] (14 MBps) [2024-11-05T16:11:12.937Z] Copying: 766/1024 [MB] (22 MBps) [2024-11-05T16:11:13.879Z] Copying: 781/1024 [MB] (15 MBps) [2024-11-05T16:11:14.822Z] Copying: 798/1024 [MB] (16 MBps) [2024-11-05T16:11:15.763Z] Copying: 819/1024 [MB] (21 MBps) 
[2024-11-05T16:11:16.705Z] Copying: 837/1024 [MB] (17 MBps) [2024-11-05T16:11:17.646Z] Copying: 856/1024 [MB] (18 MBps) [2024-11-05T16:11:18.588Z] Copying: 870/1024 [MB] (13 MBps) [2024-11-05T16:11:19.970Z] Copying: 887/1024 [MB] (17 MBps) [2024-11-05T16:11:20.912Z] Copying: 904/1024 [MB] (16 MBps) [2024-11-05T16:11:21.855Z] Copying: 918/1024 [MB] (14 MBps) [2024-11-05T16:11:22.799Z] Copying: 937/1024 [MB] (19 MBps) [2024-11-05T16:11:23.742Z] Copying: 958/1024 [MB] (20 MBps) [2024-11-05T16:11:24.688Z] Copying: 973/1024 [MB] (14 MBps) [2024-11-05T16:11:25.664Z] Copying: 992/1024 [MB] (18 MBps) [2024-11-05T16:11:26.609Z] Copying: 1002/1024 [MB] (10 MBps) [2024-11-05T16:11:27.554Z] Copying: 1013/1024 [MB] (10 MBps) [2024-11-05T16:11:27.817Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-11-05 16:11:27.569368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:06.455 [2024-11-05 16:11:27.569443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:46:06.455 [2024-11-05 16:11:27.569461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:46:06.455 [2024-11-05 16:11:27.569470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:06.455 [2024-11-05 16:11:27.569504] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:46:06.455 [2024-11-05 16:11:27.575362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:06.455 [2024-11-05 16:11:27.575621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:46:06.455 [2024-11-05 16:11:27.575650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.839 ms 00:46:06.455 [2024-11-05 16:11:27.575664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:06.455 [2024-11-05 16:11:27.576034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:06.455 [2024-11-05 16:11:27.576054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:46:06.455 [2024-11-05 16:11:27.576068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.331 ms 00:46:06.455 [2024-11-05 16:11:27.576080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:06.455 [2024-11-05 16:11:27.584520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:06.455 [2024-11-05 16:11:27.584572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:46:06.455 [2024-11-05 16:11:27.584585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.409 ms 00:46:06.455 [2024-11-05 16:11:27.584594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:06.455 [2024-11-05 16:11:27.591219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:06.455 [2024-11-05 16:11:27.591277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:46:06.455 [2024-11-05 16:11:27.591290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.580 ms 00:46:06.455 [2024-11-05 16:11:27.591297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:06.455 [2024-11-05 16:11:27.618725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:06.455 [2024-11-05 16:11:27.618793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:46:06.455 [2024-11-05 16:11:27.618806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.373 ms 00:46:06.455 [2024-11-05 
16:11:27.618814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:06.455 [2024-11-05 16:11:27.634640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:06.455 [2024-11-05 16:11:27.634871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:46:06.455 [2024-11-05 16:11:27.634895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.776 ms 00:46:06.455 [2024-11-05 16:11:27.634904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:06.716 [2024-11-05 16:11:27.995794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:06.716 [2024-11-05 16:11:27.995848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:46:06.716 [2024-11-05 16:11:27.995864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 360.838 ms 00:46:06.716 [2024-11-05 16:11:27.995874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:06.716 [2024-11-05 16:11:28.022152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:06.716 [2024-11-05 16:11:28.022362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:46:06.716 [2024-11-05 16:11:28.022384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.262 ms 00:46:06.716 [2024-11-05 16:11:28.022393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:06.716 [2024-11-05 16:11:28.048093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:06.716 [2024-11-05 16:11:28.048141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:46:06.716 [2024-11-05 16:11:28.048167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.587 ms 00:46:06.716 [2024-11-05 16:11:28.048174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:06.716 [2024-11-05 16:11:28.073172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:06.717 [2024-11-05 16:11:28.073362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:46:06.717 [2024-11-05 16:11:28.073383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.949 ms 00:46:06.717 [2024-11-05 16:11:28.073391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:06.980 [2024-11-05 16:11:28.098504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:06.980 [2024-11-05 16:11:28.098552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:46:06.980 [2024-11-05 16:11:28.098565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.924 ms 00:46:06.980 [2024-11-05 16:11:28.098572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:06.980 [2024-11-05 16:11:28.098619] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:46:06.980 [2024-11-05 16:11:28.098635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:46:06.980 [2024-11-05 16:11:28.098646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:46:06.980 [2024-11-05 16:11:28.098655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:46:06.980 [2024-11-05 16:11:28.098664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:46:06.980 [2024-11-05 16:11:28.098673] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:46:06.980 [2024-11-05 16:11:28.098681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:46:06.980 [2024-11-05 16:11:28.098690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:46:06.980 [2024-11-05 16:11:28.098698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:46:06.980 [2024-11-05 16:11:28.098705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:46:06.980 [2024-11-05 16:11:28.098713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:46:06.980 [2024-11-05 16:11:28.098721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:46:06.980 [2024-11-05 16:11:28.098729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:46:06.980 [2024-11-05 16:11:28.098760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:46:06.980 [2024-11-05 16:11:28.098768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:46:06.980 [2024-11-05 16:11:28.098776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:46:06.980 [2024-11-05 16:11:28.098785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:46:06.980 [2024-11-05 16:11:28.098793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:46:06.980 [2024-11-05 16:11:28.098801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:46:06.980 [2024-11-05 16:11:28.098809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:46:06.980 [2024-11-05 16:11:28.098817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:46:06.980 [2024-11-05 16:11:28.098825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:46:06.980 [2024-11-05 16:11:28.098833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:46:06.980 [2024-11-05 16:11:28.098841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:46:06.980 [2024-11-05 16:11:28.098850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:46:06.980 [2024-11-05 16:11:28.098858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:46:06.980 [2024-11-05 16:11:28.098866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:46:06.980 [2024-11-05 16:11:28.098876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:46:06.980 [2024-11-05 16:11:28.098885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:46:06.980 [2024-11-05 16:11:28.098894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:46:06.980 [2024-11-05 
16:11:28.098902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free
[... ftl_dev_dump_bands: Bands 31-100 all report the same "0 / 261120 wr_cnt: 0 state: free"; 70 identical per-band entries collapsed ...]
00:46:06.981 [2024-11-05 16:11:28.099479] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:46:06.981 [2024-11-05 16:11:28.099489] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 49f3c030-3db9-4227-a747-0995db4fc140
00:46:06.981 [2024-11-05 16:11:28.099497] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072
00:46:06.981 [2024-11-05 16:11:28.099506] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 30656
00:46:06.981 [2024-11-05
16:11:28.099514] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 29696 00:46:06.981 [2024-11-05 16:11:28.099522] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0323 00:46:06.981 [2024-11-05 16:11:28.099530] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:46:06.981 [2024-11-05 16:11:28.099542] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:46:06.981 [2024-11-05 16:11:28.099549] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:46:06.981 [2024-11-05 16:11:28.099564] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:46:06.981 [2024-11-05 16:11:28.099571] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:46:06.981 [2024-11-05 16:11:28.099579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:06.981 [2024-11-05 16:11:28.099587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:46:06.981 [2024-11-05 16:11:28.099596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.962 ms 00:46:06.981 [2024-11-05 16:11:28.099603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:06.981 [2024-11-05 16:11:28.113193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:06.981 [2024-11-05 16:11:28.113234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:46:06.981 [2024-11-05 16:11:28.113246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.569 ms 00:46:06.981 [2024-11-05 16:11:28.113262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:06.981 [2024-11-05 16:11:28.113670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:06.981 [2024-11-05 16:11:28.113689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:46:06.981 [2024-11-05 16:11:28.113699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.370 ms 00:46:06.981 [2024-11-05 16:11:28.113707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:06.981 [2024-11-05 16:11:28.150673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:06.981 [2024-11-05 16:11:28.150724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:46:06.981 [2024-11-05 16:11:28.150763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:06.981 [2024-11-05 16:11:28.150773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:06.981 [2024-11-05 16:11:28.150848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:06.981 [2024-11-05 16:11:28.150858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:46:06.981 [2024-11-05 16:11:28.150869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:06.981 [2024-11-05 16:11:28.150878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:06.981 [2024-11-05 16:11:28.150951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:06.981 [2024-11-05 16:11:28.150963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:46:06.981 [2024-11-05 16:11:28.150973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:06.981 [2024-11-05 16:11:28.150988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:06.981 [2024-11-05 16:11:28.151005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:46:06.981 [2024-11-05 16:11:28.151013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:46:06.981 [2024-11-05 16:11:28.151021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:06.981 [2024-11-05 16:11:28.151029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:06.981 [2024-11-05 16:11:28.237729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:06.981 [2024-11-05 16:11:28.237802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:46:06.981 [2024-11-05 16:11:28.237823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:06.981 [2024-11-05 16:11:28.237832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:06.981 [2024-11-05 16:11:28.308863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:06.981 [2024-11-05 16:11:28.308921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:46:06.981 [2024-11-05 16:11:28.308935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:06.981 [2024-11-05 16:11:28.308943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:06.981 [2024-11-05 16:11:28.309020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:06.981 [2024-11-05 16:11:28.309030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:46:06.981 [2024-11-05 16:11:28.309039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:06.981 [2024-11-05 16:11:28.309047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:06.981 [2024-11-05 16:11:28.309094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:06.981 [2024-11-05 16:11:28.309103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:46:06.981 [2024-11-05 16:11:28.309112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:06.981 [2024-11-05 16:11:28.309120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:06.981 [2024-11-05 16:11:28.309217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:06.981 [2024-11-05 16:11:28.309229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:46:06.981 [2024-11-05 16:11:28.309238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:06.981 [2024-11-05 16:11:28.309246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:06.981 [2024-11-05 16:11:28.309283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:06.982 [2024-11-05 16:11:28.309293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:46:06.982 [2024-11-05 16:11:28.309302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:06.982 [2024-11-05 16:11:28.309310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:06.982 [2024-11-05 16:11:28.309352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:06.982 [2024-11-05 16:11:28.309362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:46:06.982 [2024-11-05 16:11:28.309370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:06.982 [2024-11-05 16:11:28.309378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:06.982 
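
Note on the shutdown statistics above: the WAF (write amplification factor) of 1.0323 printed by ftl_dev_dump_stats is simply total media writes divided by user writes. A minimal sketch of the arithmetic, using only the two values from the dump (the variable names are illustrative, not part of the test scripts):

    total_writes=30656   # "total writes" from ftl_dev_dump_stats above
    user_writes=29696    # "user writes" from the same dump
    awk -v t="$total_writes" -v u="$user_writes" \
        'BEGIN { printf "WAF: %.4f\n", t / u }'   # prints: WAF: 1.0323

A WAF this close to 1.0 is consistent with the band dump above: nearly every band is still free, so almost no relocation/GC traffic has been added on top of the user writes.
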
[2024-11-05 16:11:28.309428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:06.982 [2024-11-05 16:11:28.309438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:46:06.982 [2024-11-05 16:11:28.309447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:06.982 [2024-11-05 16:11:28.309455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:06.982 [2024-11-05 16:11:28.309595] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 740.186 ms, result 0 00:46:07.925 00:46:07.925 00:46:07.925 16:11:29 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:46:10.470 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:46:10.470 16:11:31 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:46:10.470 16:11:31 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:46:10.470 16:11:31 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:46:10.470 16:11:31 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:46:10.470 16:11:31 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:46:10.470 Process with pid 74402 is not found 00:46:10.470 16:11:31 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 74402 00:46:10.470 16:11:31 ftl.ftl_restore -- common/autotest_common.sh@952 -- # '[' -z 74402 ']' 00:46:10.470 16:11:31 ftl.ftl_restore -- common/autotest_common.sh@956 -- # kill -0 74402 00:46:10.470 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (74402) - No such process 00:46:10.470 16:11:31 ftl.ftl_restore -- common/autotest_common.sh@979 -- # echo 'Process with pid 74402 is not found' 00:46:10.470 16:11:31 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:46:10.470 Remove shared memory files 00:46:10.470 16:11:31 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:46:10.470 16:11:31 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:46:10.470 16:11:31 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:46:10.470 16:11:31 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:46:10.470 16:11:31 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:46:10.470 16:11:31 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:46:10.470 ************************************ 00:46:10.470 END TEST ftl_restore 00:46:10.470 ************************************ 00:46:10.470 00:46:10.470 real 5m1.296s 00:46:10.470 user 4m49.970s 00:46:10.470 sys 0m10.953s 00:46:10.470 16:11:31 ftl.ftl_restore -- common/autotest_common.sh@1128 -- # xtrace_disable 00:46:10.470 16:11:31 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:46:10.470 16:11:31 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:46:10.470 16:11:31 ftl -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:46:10.470 16:11:31 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:46:10.470 16:11:31 ftl -- common/autotest_common.sh@10 -- # set +x 00:46:10.470 ************************************ 00:46:10.470 START TEST ftl_dirty_shutdown 00:46:10.470 ************************************ 00:46:10.470 16:11:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1127 -- # 
/home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:46:10.470 * Looking for test storage... 00:46:10.470 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:46:10.470 16:11:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:46:10.470 16:11:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:46:10.470 16:11:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:46:10.470 16:11:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:46:10.470 16:11:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:46:10.470 16:11:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:46:10.470 16:11:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:46:10.470 16:11:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:46:10.470 16:11:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:46:10.470 16:11:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:46:10.470 16:11:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:46:10.470 16:11:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:46:10.470 16:11:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:46:10.470 16:11:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:46:10.470 16:11:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:46:10.470 16:11:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:46:10.470 16:11:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:46:10.470 16:11:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:46:10.470 16:11:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:46:10.470 16:11:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:46:10.470 16:11:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:46:10.470 16:11:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:46:10.470 16:11:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:46:10.470 16:11:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:46:10.470 16:11:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:46:10.470 16:11:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:46:10.471 16:11:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:46:10.471 16:11:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:46:10.471 16:11:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:46:10.471 16:11:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:46:10.471 16:11:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:46:10.471 16:11:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:46:10.471 16:11:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:46:10.471 16:11:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:46:10.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:10.471 --rc genhtml_branch_coverage=1 00:46:10.471 --rc genhtml_function_coverage=1 00:46:10.471 --rc genhtml_legend=1 00:46:10.471 --rc geninfo_all_blocks=1 00:46:10.471 --rc geninfo_unexecuted_blocks=1 00:46:10.471 00:46:10.471 ' 00:46:10.471 16:11:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:46:10.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:10.471 --rc genhtml_branch_coverage=1 00:46:10.471 --rc genhtml_function_coverage=1 00:46:10.471 --rc genhtml_legend=1 00:46:10.471 --rc geninfo_all_blocks=1 00:46:10.471 --rc geninfo_unexecuted_blocks=1 00:46:10.471 00:46:10.471 ' 00:46:10.471 16:11:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:46:10.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:10.471 --rc genhtml_branch_coverage=1 00:46:10.471 --rc genhtml_function_coverage=1 00:46:10.471 --rc genhtml_legend=1 00:46:10.471 --rc geninfo_all_blocks=1 00:46:10.471 --rc geninfo_unexecuted_blocks=1 00:46:10.471 00:46:10.471 ' 00:46:10.471 16:11:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:46:10.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:10.471 --rc genhtml_branch_coverage=1 00:46:10.471 --rc genhtml_function_coverage=1 00:46:10.471 --rc genhtml_legend=1 00:46:10.471 --rc geninfo_all_blocks=1 00:46:10.471 --rc geninfo_unexecuted_blocks=1 00:46:10.471 00:46:10.471 ' 00:46:10.471 16:11:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:46:10.471 16:11:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:46:10.471 16:11:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:46:10.471 16:11:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:46:10.471 16:11:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:46:10.471 16:11:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:46:10.471 16:11:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:46:10.471 16:11:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:46:10.471 16:11:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:46:10.471 16:11:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:46:10.471 16:11:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:46:10.471 16:11:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:46:10.471 16:11:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:46:10.471 16:11:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:46:10.471 16:11:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:46:10.471 16:11:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:46:10.471 16:11:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:46:10.471 16:11:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:46:10.471 16:11:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:46:10.471 16:11:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:46:10.471 16:11:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:46:10.471 16:11:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:46:10.471 16:11:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:46:10.471 16:11:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:46:10.471 16:11:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:46:10.471 16:11:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:46:10.471 16:11:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:46:10.471 16:11:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:10.471 16:11:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:10.471 16:11:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:46:10.471 16:11:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:10.471 16:11:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:46:10.471 16:11:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:46:10.471 16:11:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:46:10.471 16:11:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:46:10.471 16:11:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:46:10.471 16:11:31 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:46:10.471 16:11:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:46:10.471 16:11:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:46:10.471 16:11:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:46:10.471 16:11:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:46:10.471 16:11:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:46:10.471 16:11:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=77579 00:46:10.471 16:11:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 77579 00:46:10.471 16:11:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:46:10.471 16:11:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@833 -- # '[' -z 77579 ']' 00:46:10.471 16:11:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:10.471 16:11:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:46:10.471 16:11:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:10.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:10.471 16:11:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:46:10.471 16:11:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:46:10.471 [2024-11-05 16:11:31.640214] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 
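
The sequence above is the stock bring-up for these tests: spdk_tgt is started pinned to a single core (-m 0x1), its pid is recorded in svcpid, and waitforlisten blocks until the target's RPC socket answers before any rpc.py calls are issued. A reduced sketch of that pattern (paths taken from the log; the real waitforlisten in autotest_common.sh carries extra retry and timeout handling):

    SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$SPDK_TGT" -m 0x1 &                 # single-core reactor mask, as in the log
    svcpid=$!
    trap 'kill "$svcpid" 2>/dev/null; exit 1' SIGINT SIGTERM EXIT

    # Poll the default RPC socket until the target responds.
    until "$RPC" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done

Once the loop exits, the target is ready for the bdev_nvme_attach_controller and bdev_lvol_* calls that follow in the trace.
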
00:46:10.471 [2024-11-05 16:11:31.640555] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77579 ] 00:46:10.471 [2024-11-05 16:11:31.802187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:10.733 [2024-11-05 16:11:31.925947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:46:11.303 16:11:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:46:11.303 16:11:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@866 -- # return 0 00:46:11.303 16:11:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:46:11.303 16:11:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:46:11.303 16:11:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:46:11.303 16:11:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:46:11.303 16:11:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:46:11.303 16:11:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:46:11.565 16:11:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:46:11.565 16:11:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:46:11.565 16:11:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:46:11.565 16:11:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:46:11.565 16:11:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:46:11.565 16:11:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:46:11.565 16:11:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:46:11.565 16:11:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:46:11.827 16:11:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:46:11.827 { 00:46:11.827 "name": "nvme0n1", 00:46:11.827 "aliases": [ 00:46:11.827 "c5db7c0a-56b1-4b85-9db8-bdf870c92112" 00:46:11.827 ], 00:46:11.827 "product_name": "NVMe disk", 00:46:11.827 "block_size": 4096, 00:46:11.827 "num_blocks": 1310720, 00:46:11.827 "uuid": "c5db7c0a-56b1-4b85-9db8-bdf870c92112", 00:46:11.827 "numa_id": -1, 00:46:11.827 "assigned_rate_limits": { 00:46:11.827 "rw_ios_per_sec": 0, 00:46:11.827 "rw_mbytes_per_sec": 0, 00:46:11.827 "r_mbytes_per_sec": 0, 00:46:11.827 "w_mbytes_per_sec": 0 00:46:11.827 }, 00:46:11.827 "claimed": true, 00:46:11.827 "claim_type": "read_many_write_one", 00:46:11.827 "zoned": false, 00:46:11.827 "supported_io_types": { 00:46:11.827 "read": true, 00:46:11.827 "write": true, 00:46:11.827 "unmap": true, 00:46:11.827 "flush": true, 00:46:11.827 "reset": true, 00:46:11.827 "nvme_admin": true, 00:46:11.827 "nvme_io": true, 00:46:11.827 "nvme_io_md": false, 00:46:11.827 "write_zeroes": true, 00:46:11.827 "zcopy": false, 00:46:11.827 "get_zone_info": false, 00:46:11.827 "zone_management": false, 00:46:11.827 "zone_append": false, 00:46:11.827 "compare": true, 00:46:11.827 "compare_and_write": false, 00:46:11.827 "abort": true, 00:46:11.827 "seek_hole": false, 00:46:11.827 "seek_data": false, 00:46:11.827 
"copy": true, 00:46:11.827 "nvme_iov_md": false 00:46:11.827 }, 00:46:11.827 "driver_specific": { 00:46:11.827 "nvme": [ 00:46:11.827 { 00:46:11.827 "pci_address": "0000:00:11.0", 00:46:11.827 "trid": { 00:46:11.827 "trtype": "PCIe", 00:46:11.827 "traddr": "0000:00:11.0" 00:46:11.827 }, 00:46:11.827 "ctrlr_data": { 00:46:11.827 "cntlid": 0, 00:46:11.827 "vendor_id": "0x1b36", 00:46:11.827 "model_number": "QEMU NVMe Ctrl", 00:46:11.827 "serial_number": "12341", 00:46:11.827 "firmware_revision": "8.0.0", 00:46:11.827 "subnqn": "nqn.2019-08.org.qemu:12341", 00:46:11.827 "oacs": { 00:46:11.827 "security": 0, 00:46:11.827 "format": 1, 00:46:11.827 "firmware": 0, 00:46:11.827 "ns_manage": 1 00:46:11.827 }, 00:46:11.827 "multi_ctrlr": false, 00:46:11.827 "ana_reporting": false 00:46:11.827 }, 00:46:11.827 "vs": { 00:46:11.827 "nvme_version": "1.4" 00:46:11.827 }, 00:46:11.827 "ns_data": { 00:46:11.827 "id": 1, 00:46:11.827 "can_share": false 00:46:11.827 } 00:46:11.827 } 00:46:11.827 ], 00:46:11.827 "mp_policy": "active_passive" 00:46:11.827 } 00:46:11.827 } 00:46:11.827 ]' 00:46:11.827 16:11:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:46:11.827 16:11:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:46:11.827 16:11:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:46:12.089 16:11:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=1310720 00:46:12.089 16:11:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:46:12.089 16:11:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 5120 00:46:12.089 16:11:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:46:12.089 16:11:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:46:12.089 16:11:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:46:12.089 16:11:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:46:12.089 16:11:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:46:12.089 16:11:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=39f4887c-03a0-4829-aa32-cc5dcac30a86 00:46:12.089 16:11:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:46:12.089 16:11:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 39f4887c-03a0-4829-aa32-cc5dcac30a86 00:46:12.350 16:11:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:46:12.612 16:11:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=48f5e9cc-6b3f-4ec8-bdd8-56d0da4a883e 00:46:12.612 16:11:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 48f5e9cc-6b3f-4ec8-bdd8-56d0da4a883e 00:46:12.874 16:11:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=69cb4de0-7524-4da2-bfd2-6af6261df014 00:46:12.874 16:11:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:46:12.874 16:11:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 69cb4de0-7524-4da2-bfd2-6af6261df014 00:46:12.874 16:11:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:46:12.874 16:11:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:46:12.874 16:11:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=69cb4de0-7524-4da2-bfd2-6af6261df014 00:46:12.874 16:11:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:46:12.874 16:11:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 69cb4de0-7524-4da2-bfd2-6af6261df014 00:46:12.874 16:11:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=69cb4de0-7524-4da2-bfd2-6af6261df014 00:46:12.874 16:11:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:46:12.874 16:11:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:46:12.874 16:11:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:46:12.874 16:11:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 69cb4de0-7524-4da2-bfd2-6af6261df014 00:46:13.134 16:11:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:46:13.134 { 00:46:13.134 "name": "69cb4de0-7524-4da2-bfd2-6af6261df014", 00:46:13.134 "aliases": [ 00:46:13.134 "lvs/nvme0n1p0" 00:46:13.134 ], 00:46:13.134 "product_name": "Logical Volume", 00:46:13.134 "block_size": 4096, 00:46:13.134 "num_blocks": 26476544, 00:46:13.134 "uuid": "69cb4de0-7524-4da2-bfd2-6af6261df014", 00:46:13.134 "assigned_rate_limits": { 00:46:13.134 "rw_ios_per_sec": 0, 00:46:13.134 "rw_mbytes_per_sec": 0, 00:46:13.134 "r_mbytes_per_sec": 0, 00:46:13.134 "w_mbytes_per_sec": 0 00:46:13.134 }, 00:46:13.134 "claimed": false, 00:46:13.134 "zoned": false, 00:46:13.134 "supported_io_types": { 00:46:13.134 "read": true, 00:46:13.134 "write": true, 00:46:13.134 "unmap": true, 00:46:13.134 "flush": false, 00:46:13.134 "reset": true, 00:46:13.134 "nvme_admin": false, 00:46:13.134 "nvme_io": false, 00:46:13.134 "nvme_io_md": false, 00:46:13.134 "write_zeroes": true, 00:46:13.134 "zcopy": false, 00:46:13.134 "get_zone_info": false, 00:46:13.134 "zone_management": false, 00:46:13.134 "zone_append": false, 00:46:13.134 "compare": false, 00:46:13.134 "compare_and_write": false, 00:46:13.134 "abort": false, 00:46:13.134 "seek_hole": true, 00:46:13.134 "seek_data": true, 00:46:13.134 "copy": false, 00:46:13.134 "nvme_iov_md": false 00:46:13.134 }, 00:46:13.134 "driver_specific": { 00:46:13.134 "lvol": { 00:46:13.134 "lvol_store_uuid": "48f5e9cc-6b3f-4ec8-bdd8-56d0da4a883e", 00:46:13.134 "base_bdev": "nvme0n1", 00:46:13.134 "thin_provision": true, 00:46:13.134 "num_allocated_clusters": 0, 00:46:13.134 "snapshot": false, 00:46:13.134 "clone": false, 00:46:13.134 "esnap_clone": false 00:46:13.134 } 00:46:13.134 } 00:46:13.134 } 00:46:13.134 ]' 00:46:13.134 16:11:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:46:13.134 16:11:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:46:13.134 16:11:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:46:13.134 16:11:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=26476544 00:46:13.134 16:11:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:46:13.134 16:11:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 103424 00:46:13.134 16:11:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:46:13.134 16:11:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:46:13.134 16:11:34 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:46:13.392 16:11:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:46:13.392 16:11:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:46:13.392 16:11:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 69cb4de0-7524-4da2-bfd2-6af6261df014 00:46:13.392 16:11:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=69cb4de0-7524-4da2-bfd2-6af6261df014 00:46:13.393 16:11:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:46:13.393 16:11:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:46:13.393 16:11:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:46:13.393 16:11:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 69cb4de0-7524-4da2-bfd2-6af6261df014 00:46:13.652 16:11:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:46:13.652 { 00:46:13.652 "name": "69cb4de0-7524-4da2-bfd2-6af6261df014", 00:46:13.652 "aliases": [ 00:46:13.652 "lvs/nvme0n1p0" 00:46:13.652 ], 00:46:13.652 "product_name": "Logical Volume", 00:46:13.652 "block_size": 4096, 00:46:13.652 "num_blocks": 26476544, 00:46:13.652 "uuid": "69cb4de0-7524-4da2-bfd2-6af6261df014", 00:46:13.652 "assigned_rate_limits": { 00:46:13.652 "rw_ios_per_sec": 0, 00:46:13.652 "rw_mbytes_per_sec": 0, 00:46:13.652 "r_mbytes_per_sec": 0, 00:46:13.652 "w_mbytes_per_sec": 0 00:46:13.652 }, 00:46:13.652 "claimed": false, 00:46:13.652 "zoned": false, 00:46:13.652 "supported_io_types": { 00:46:13.652 "read": true, 00:46:13.652 "write": true, 00:46:13.652 "unmap": true, 00:46:13.652 "flush": false, 00:46:13.652 "reset": true, 00:46:13.652 "nvme_admin": false, 00:46:13.652 "nvme_io": false, 00:46:13.652 "nvme_io_md": false, 00:46:13.652 "write_zeroes": true, 00:46:13.652 "zcopy": false, 00:46:13.652 "get_zone_info": false, 00:46:13.652 "zone_management": false, 00:46:13.652 "zone_append": false, 00:46:13.652 "compare": false, 00:46:13.652 "compare_and_write": false, 00:46:13.652 "abort": false, 00:46:13.652 "seek_hole": true, 00:46:13.652 "seek_data": true, 00:46:13.652 "copy": false, 00:46:13.652 "nvme_iov_md": false 00:46:13.652 }, 00:46:13.652 "driver_specific": { 00:46:13.652 "lvol": { 00:46:13.652 "lvol_store_uuid": "48f5e9cc-6b3f-4ec8-bdd8-56d0da4a883e", 00:46:13.652 "base_bdev": "nvme0n1", 00:46:13.652 "thin_provision": true, 00:46:13.652 "num_allocated_clusters": 0, 00:46:13.652 "snapshot": false, 00:46:13.652 "clone": false, 00:46:13.652 "esnap_clone": false 00:46:13.652 } 00:46:13.652 } 00:46:13.652 } 00:46:13.652 ]' 00:46:13.652 16:11:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:46:13.652 16:11:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:46:13.652 16:11:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:46:13.652 16:11:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=26476544 00:46:13.652 16:11:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:46:13.652 16:11:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 103424 00:46:13.652 16:11:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:46:13.652 16:11:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:46:13.911 16:11:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:46:13.911 16:11:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 69cb4de0-7524-4da2-bfd2-6af6261df014 00:46:13.911 16:11:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=69cb4de0-7524-4da2-bfd2-6af6261df014 00:46:13.911 16:11:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:46:13.911 16:11:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:46:13.911 16:11:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:46:13.911 16:11:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 69cb4de0-7524-4da2-bfd2-6af6261df014 00:46:14.169 16:11:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:46:14.169 { 00:46:14.169 "name": "69cb4de0-7524-4da2-bfd2-6af6261df014", 00:46:14.169 "aliases": [ 00:46:14.169 "lvs/nvme0n1p0" 00:46:14.169 ], 00:46:14.169 "product_name": "Logical Volume", 00:46:14.169 "block_size": 4096, 00:46:14.169 "num_blocks": 26476544, 00:46:14.169 "uuid": "69cb4de0-7524-4da2-bfd2-6af6261df014", 00:46:14.169 "assigned_rate_limits": { 00:46:14.170 "rw_ios_per_sec": 0, 00:46:14.170 "rw_mbytes_per_sec": 0, 00:46:14.170 "r_mbytes_per_sec": 0, 00:46:14.170 "w_mbytes_per_sec": 0 00:46:14.170 }, 00:46:14.170 "claimed": false, 00:46:14.170 "zoned": false, 00:46:14.170 "supported_io_types": { 00:46:14.170 "read": true, 00:46:14.170 "write": true, 00:46:14.170 "unmap": true, 00:46:14.170 "flush": false, 00:46:14.170 "reset": true, 00:46:14.170 "nvme_admin": false, 00:46:14.170 "nvme_io": false, 00:46:14.170 "nvme_io_md": false, 00:46:14.170 "write_zeroes": true, 00:46:14.170 "zcopy": false, 00:46:14.170 "get_zone_info": false, 00:46:14.170 "zone_management": false, 00:46:14.170 "zone_append": false, 00:46:14.170 "compare": false, 00:46:14.170 "compare_and_write": false, 00:46:14.170 "abort": false, 00:46:14.170 "seek_hole": true, 00:46:14.170 "seek_data": true, 00:46:14.170 "copy": false, 00:46:14.170 "nvme_iov_md": false 00:46:14.170 }, 00:46:14.170 "driver_specific": { 00:46:14.170 "lvol": { 00:46:14.170 "lvol_store_uuid": "48f5e9cc-6b3f-4ec8-bdd8-56d0da4a883e", 00:46:14.170 "base_bdev": "nvme0n1", 00:46:14.170 "thin_provision": true, 00:46:14.170 "num_allocated_clusters": 0, 00:46:14.170 "snapshot": false, 00:46:14.170 "clone": false, 00:46:14.170 "esnap_clone": false 00:46:14.170 } 00:46:14.170 } 00:46:14.170 } 00:46:14.170 ]' 00:46:14.170 16:11:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:46:14.170 16:11:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:46:14.170 16:11:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:46:14.170 16:11:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=26476544 00:46:14.170 16:11:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:46:14.170 16:11:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 103424 00:46:14.170 16:11:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:46:14.170 16:11:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 69cb4de0-7524-4da2-bfd2-6af6261df014 
--l2p_dram_limit 10' 00:46:14.170 16:11:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:46:14.170 16:11:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:46:14.170 16:11:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:46:14.170 16:11:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 69cb4de0-7524-4da2-bfd2-6af6261df014 --l2p_dram_limit 10 -c nvc0n1p0 00:46:14.431 [2024-11-05 16:11:35.565298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:14.431 [2024-11-05 16:11:35.565339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:46:14.431 [2024-11-05 16:11:35.565352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:46:14.431 [2024-11-05 16:11:35.565359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:14.431 [2024-11-05 16:11:35.565403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:14.431 [2024-11-05 16:11:35.565410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:46:14.431 [2024-11-05 16:11:35.565418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:46:14.431 [2024-11-05 16:11:35.565424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:14.431 [2024-11-05 16:11:35.565443] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:46:14.431 [2024-11-05 16:11:35.566034] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:46:14.431 [2024-11-05 16:11:35.566051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:14.431 [2024-11-05 16:11:35.566057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:46:14.431 [2024-11-05 16:11:35.566065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.611 ms 00:46:14.431 [2024-11-05 16:11:35.566071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:14.431 [2024-11-05 16:11:35.566097] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 8db061b2-73ec-45fb-89ba-dce50d3beacb 00:46:14.431 [2024-11-05 16:11:35.567106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:14.431 [2024-11-05 16:11:35.567137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:46:14.431 [2024-11-05 16:11:35.567145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:46:14.431 [2024-11-05 16:11:35.567152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:14.431 [2024-11-05 16:11:35.571948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:14.431 [2024-11-05 16:11:35.572061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:46:14.431 [2024-11-05 16:11:35.572075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.739 ms 00:46:14.431 [2024-11-05 16:11:35.572083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:14.431 [2024-11-05 16:11:35.572152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:14.431 [2024-11-05 16:11:35.572161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:46:14.431 [2024-11-05 16:11:35.572168] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:46:14.431 [2024-11-05 16:11:35.572177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:14.431 [2024-11-05 16:11:35.572214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:14.431 [2024-11-05 16:11:35.572223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:46:14.431 [2024-11-05 16:11:35.572229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:46:14.431 [2024-11-05 16:11:35.572238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:14.431 [2024-11-05 16:11:35.572255] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:46:14.431 [2024-11-05 16:11:35.575163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:14.431 [2024-11-05 16:11:35.575260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:46:14.431 [2024-11-05 16:11:35.575276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.912 ms 00:46:14.431 [2024-11-05 16:11:35.575282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:14.431 [2024-11-05 16:11:35.575311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:14.431 [2024-11-05 16:11:35.575317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:46:14.432 [2024-11-05 16:11:35.575325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:46:14.432 [2024-11-05 16:11:35.575331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:14.432 [2024-11-05 16:11:35.575351] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:46:14.432 [2024-11-05 16:11:35.575459] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:46:14.432 [2024-11-05 16:11:35.575471] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:46:14.432 [2024-11-05 16:11:35.575479] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:46:14.432 [2024-11-05 16:11:35.575489] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:46:14.432 [2024-11-05 16:11:35.575496] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:46:14.432 [2024-11-05 16:11:35.575503] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:46:14.432 [2024-11-05 16:11:35.575508] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:46:14.432 [2024-11-05 16:11:35.575517] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:46:14.432 [2024-11-05 16:11:35.575522] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:46:14.432 [2024-11-05 16:11:35.575530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:14.432 [2024-11-05 16:11:35.575536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:46:14.432 [2024-11-05 16:11:35.575543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.180 ms 00:46:14.432 [2024-11-05 16:11:35.575553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:14.432 [2024-11-05 16:11:35.575626] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:14.432 [2024-11-05 16:11:35.575633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:46:14.432 [2024-11-05 16:11:35.575639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:46:14.432 [2024-11-05 16:11:35.575645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:14.432 [2024-11-05 16:11:35.575728] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:46:14.432 [2024-11-05 16:11:35.575750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:46:14.432 [2024-11-05 16:11:35.575758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:46:14.432 [2024-11-05 16:11:35.575764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:14.432 [2024-11-05 16:11:35.575771] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:46:14.432 [2024-11-05 16:11:35.575776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:46:14.432 [2024-11-05 16:11:35.575782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:46:14.432 [2024-11-05 16:11:35.575787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:46:14.432 [2024-11-05 16:11:35.575794] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:46:14.432 [2024-11-05 16:11:35.575799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:46:14.432 [2024-11-05 16:11:35.575808] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:46:14.432 [2024-11-05 16:11:35.575813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:46:14.432 [2024-11-05 16:11:35.575820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:46:14.432 [2024-11-05 16:11:35.575825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:46:14.432 [2024-11-05 16:11:35.575831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:46:14.432 [2024-11-05 16:11:35.575836] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:14.432 [2024-11-05 16:11:35.575844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:46:14.432 [2024-11-05 16:11:35.575849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:46:14.432 [2024-11-05 16:11:35.575857] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:14.432 [2024-11-05 16:11:35.575863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:46:14.432 [2024-11-05 16:11:35.575869] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:46:14.432 [2024-11-05 16:11:35.575874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:46:14.432 [2024-11-05 16:11:35.575880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:46:14.432 [2024-11-05 16:11:35.575885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:46:14.432 [2024-11-05 16:11:35.575891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:46:14.432 [2024-11-05 16:11:35.575897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:46:14.432 [2024-11-05 16:11:35.575903] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:46:14.432 [2024-11-05 16:11:35.575908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:46:14.432 [2024-11-05 16:11:35.575914] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:46:14.432 [2024-11-05 16:11:35.575919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:46:14.432 [2024-11-05 16:11:35.575925] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:46:14.432 [2024-11-05 16:11:35.575929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:46:14.432 [2024-11-05 16:11:35.575937] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:46:14.432 [2024-11-05 16:11:35.575942] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:46:14.432 [2024-11-05 16:11:35.575948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:46:14.432 [2024-11-05 16:11:35.575953] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:46:14.432 [2024-11-05 16:11:35.575959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:46:14.432 [2024-11-05 16:11:35.575964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:46:14.432 [2024-11-05 16:11:35.575970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:46:14.432 [2024-11-05 16:11:35.575975] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:14.432 [2024-11-05 16:11:35.575981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:46:14.432 [2024-11-05 16:11:35.575986] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:46:14.432 [2024-11-05 16:11:35.575992] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:14.432 [2024-11-05 16:11:35.575996] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:46:14.432 [2024-11-05 16:11:35.576004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:46:14.432 [2024-11-05 16:11:35.576009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:46:14.432 [2024-11-05 16:11:35.576017] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:14.432 [2024-11-05 16:11:35.576022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:46:14.432 [2024-11-05 16:11:35.576030] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:46:14.432 [2024-11-05 16:11:35.576035] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:46:14.432 [2024-11-05 16:11:35.576041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:46:14.432 [2024-11-05 16:11:35.576047] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:46:14.432 [2024-11-05 16:11:35.576054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:46:14.432 [2024-11-05 16:11:35.576062] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:46:14.432 [2024-11-05 16:11:35.576071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:46:14.432 [2024-11-05 16:11:35.576079] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:46:14.432 [2024-11-05 16:11:35.576086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:46:14.432 [2024-11-05 16:11:35.576092] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:46:14.432 [2024-11-05 16:11:35.576098] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:46:14.432 [2024-11-05 16:11:35.576103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:46:14.432 [2024-11-05 16:11:35.576110] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:46:14.432 [2024-11-05 16:11:35.576115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:46:14.432 [2024-11-05 16:11:35.576122] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:46:14.432 [2024-11-05 16:11:35.576127] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:46:14.432 [2024-11-05 16:11:35.576134] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:46:14.432 [2024-11-05 16:11:35.576140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:46:14.432 [2024-11-05 16:11:35.576147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:46:14.432 [2024-11-05 16:11:35.576152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:46:14.432 [2024-11-05 16:11:35.576160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:46:14.432 [2024-11-05 16:11:35.576165] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:46:14.432 [2024-11-05 16:11:35.576173] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:46:14.432 [2024-11-05 16:11:35.576179] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:46:14.432 [2024-11-05 16:11:35.576186] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:46:14.432 [2024-11-05 16:11:35.576191] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:46:14.432 [2024-11-05 16:11:35.576197] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:46:14.432 [2024-11-05 16:11:35.576203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:14.432 [2024-11-05 16:11:35.576209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:46:14.432 [2024-11-05 16:11:35.576215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.533 ms 00:46:14.433 [2024-11-05 16:11:35.576221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:14.433 [2024-11-05 16:11:35.576262] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:46:14.433 [2024-11-05 16:11:35.576273] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:46:18.658 [2024-11-05 16:11:39.708582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:18.658 [2024-11-05 16:11:39.708669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:46:18.658 [2024-11-05 16:11:39.708688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4132.302 ms 00:46:18.658 [2024-11-05 16:11:39.708700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:18.658 [2024-11-05 16:11:39.741650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:18.658 [2024-11-05 16:11:39.741720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:46:18.658 [2024-11-05 16:11:39.741755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.670 ms 00:46:18.658 [2024-11-05 16:11:39.741767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:18.658 [2024-11-05 16:11:39.741914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:18.658 [2024-11-05 16:11:39.741928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:46:18.658 [2024-11-05 16:11:39.741938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:46:18.658 [2024-11-05 16:11:39.741951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:18.658 [2024-11-05 16:11:39.777830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:18.658 [2024-11-05 16:11:39.777886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:46:18.658 [2024-11-05 16:11:39.777900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.838 ms 00:46:18.658 [2024-11-05 16:11:39.777910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:18.658 [2024-11-05 16:11:39.777948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:18.658 [2024-11-05 16:11:39.777964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:46:18.658 [2024-11-05 16:11:39.777973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:46:18.658 [2024-11-05 16:11:39.777984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:18.658 [2024-11-05 16:11:39.778573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:18.658 [2024-11-05 16:11:39.778599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:46:18.658 [2024-11-05 16:11:39.778609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.513 ms 00:46:18.658 [2024-11-05 16:11:39.778619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:18.658 [2024-11-05 16:11:39.778769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:18.658 [2024-11-05 16:11:39.778782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:46:18.658 [2024-11-05 16:11:39.778795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:46:18.658 [2024-11-05 16:11:39.778809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:18.658 [2024-11-05 16:11:39.796480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:18.658 [2024-11-05 16:11:39.796769] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:46:18.658 [2024-11-05 16:11:39.796792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.651 ms 00:46:18.658 [2024-11-05 16:11:39.796804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:18.658 [2024-11-05 16:11:39.810087] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:46:18.658 [2024-11-05 16:11:39.813937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:18.658 [2024-11-05 16:11:39.813980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:46:18.658 [2024-11-05 16:11:39.813994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.033 ms 00:46:18.658 [2024-11-05 16:11:39.814002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:18.658 [2024-11-05 16:11:39.927986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:18.658 [2024-11-05 16:11:39.928050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:46:18.658 [2024-11-05 16:11:39.928070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 113.943 ms 00:46:18.658 [2024-11-05 16:11:39.928080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:18.658 [2024-11-05 16:11:39.928296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:18.658 [2024-11-05 16:11:39.928311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:46:18.658 [2024-11-05 16:11:39.928327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.159 ms 00:46:18.658 [2024-11-05 16:11:39.928336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:18.658 [2024-11-05 16:11:39.955152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:18.658 [2024-11-05 16:11:39.955347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:46:18.658 [2024-11-05 16:11:39.955378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.757 ms 00:46:18.658 [2024-11-05 16:11:39.955387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:18.658 [2024-11-05 16:11:39.981080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:18.658 [2024-11-05 16:11:39.981126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:46:18.658 [2024-11-05 16:11:39.981143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.565 ms 00:46:18.658 [2024-11-05 16:11:39.981151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:18.658 [2024-11-05 16:11:39.981780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:18.658 [2024-11-05 16:11:39.981800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:46:18.658 [2024-11-05 16:11:39.981813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.578 ms 00:46:18.658 [2024-11-05 16:11:39.981821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:18.945 [2024-11-05 16:11:40.070295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:18.945 [2024-11-05 16:11:40.070518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:46:18.945 [2024-11-05 16:11:40.070552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.406 ms 00:46:18.945 [2024-11-05 16:11:40.070561] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:18.945 [2024-11-05 16:11:40.098704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:18.945 [2024-11-05 16:11:40.098777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:46:18.945 [2024-11-05 16:11:40.098794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.965 ms 00:46:18.945 [2024-11-05 16:11:40.098803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:18.945 [2024-11-05 16:11:40.125175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:18.945 [2024-11-05 16:11:40.125382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:46:18.945 [2024-11-05 16:11:40.125410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.313 ms 00:46:18.945 [2024-11-05 16:11:40.125418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:18.945 [2024-11-05 16:11:40.152172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:18.945 [2024-11-05 16:11:40.152374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:46:18.945 [2024-11-05 16:11:40.152405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.386 ms 00:46:18.945 [2024-11-05 16:11:40.152414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:18.945 [2024-11-05 16:11:40.152468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:18.945 [2024-11-05 16:11:40.152478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:46:18.945 [2024-11-05 16:11:40.152493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:46:18.945 [2024-11-05 16:11:40.152501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:18.945 [2024-11-05 16:11:40.152599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:18.945 [2024-11-05 16:11:40.152610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:46:18.945 [2024-11-05 16:11:40.152624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:46:18.945 [2024-11-05 16:11:40.152632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:18.945 [2024-11-05 16:11:40.153972] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4588.124 ms, result 0 00:46:18.945 { 00:46:18.945 "name": "ftl0", 00:46:18.945 "uuid": "8db061b2-73ec-45fb-89ba-dce50d3beacb" 00:46:18.945 } 00:46:18.945 16:11:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:46:18.945 16:11:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:46:19.207 16:11:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:46:19.207 16:11:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:46:19.207 16:11:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:46:19.468 /dev/nbd0 00:46:19.468 16:11:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:46:19.468 16:11:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:46:19.468 16:11:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # local i 00:46:19.468 16:11:40 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@873 -- # (( i = 1 )) 00:46:19.468 16:11:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:46:19.468 16:11:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:46:19.468 16:11:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # break 00:46:19.468 16:11:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:46:19.468 16:11:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:46:19.468 16:11:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:46:19.468 1+0 records in 00:46:19.468 1+0 records out 00:46:19.468 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000391267 s, 10.5 MB/s 00:46:19.468 16:11:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:46:19.468 16:11:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # size=4096 00:46:19.468 16:11:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:46:19.468 16:11:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:46:19.468 16:11:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # return 0 00:46:19.468 16:11:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:46:19.468 [2024-11-05 16:11:40.741099] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 00:46:19.468 [2024-11-05 16:11:40.741529] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77732 ] 00:46:19.730 [2024-11-05 16:11:40.909214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:19.730 [2024-11-05 16:11:41.059067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:46:21.119  [2024-11-05T16:11:43.420Z] Copying: 186/1024 [MB] (186 MBps) [2024-11-05T16:11:44.354Z] Copying: 403/1024 [MB] (216 MBps) [2024-11-05T16:11:45.727Z] Copying: 658/1024 [MB] (255 MBps) [2024-11-05T16:11:45.985Z] Copying: 901/1024 [MB] (243 MBps) [2024-11-05T16:11:46.551Z] Copying: 1024/1024 [MB] (average 227 MBps) 00:46:25.189 00:46:25.189 16:11:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:46:27.090 16:11:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:46:27.090 [2024-11-05 16:11:48.373066] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 
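At this point the script has filled testfile with 262144 blocks of 4096 B from /dev/urandom (exactly 1024 MiB), taken its md5sum as the reference checksum, and launched the spdk_dd run whose banner appears above to stream the file onto ftl0 through /dev/nbd0 with direct I/O. A minimal out-of-harness sketch of the same copy-in phase follows, assuming a running SPDK target that already exposes ftl0 with the nbd kernel module loaded (both set up earlier in this log), and with two stated substitutions: coreutils dd stands in for spdk_dd, and rpc.py is invoked by its repo-relative path rather than the absolute path used in this run:

  # copy-in phase, reconstructed from the dirty_shutdown.sh trace above
  scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0            # export the FTL bdev as /dev/nbd0
  dd if=/dev/urandom of=testfile bs=4096 count=262144     # 262144 * 4096 B = 1024 MiB of random data
  md5sum testfile                                         # reference checksum for the later verify
  dd if=testfile of=/dev/nbd0 bs=4096 count=262144 oflag=direct   # push it through the FTL device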
00:46:27.090 [2024-11-05 16:11:48.373155] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77820 ] 00:46:27.348 [2024-11-05 16:11:48.520161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:27.348 [2024-11-05 16:11:48.607165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:46:28.729  [2024-11-05T16:11:51.034Z] Copying: 22/1024 [MB] (22 MBps) [2024-11-05T16:11:51.976Z] Copying: 34/1024 [MB] (12 MBps) [2024-11-05T16:11:52.910Z] Copying: 47/1024 [MB] (12 MBps) [2024-11-05T16:11:53.848Z] Copying: 72/1024 [MB] (25 MBps) [2024-11-05T16:11:54.822Z] Copying: 86/1024 [MB] (14 MBps) [2024-11-05T16:11:56.208Z] Copying: 98/1024 [MB] (11 MBps) [2024-11-05T16:11:57.150Z] Copying: 114/1024 [MB] (16 MBps) [2024-11-05T16:11:58.093Z] Copying: 130/1024 [MB] (16 MBps) [2024-11-05T16:11:59.035Z] Copying: 145/1024 [MB] (15 MBps) [2024-11-05T16:11:59.975Z] Copying: 161/1024 [MB] (15 MBps) [2024-11-05T16:12:00.909Z] Copying: 180/1024 [MB] (19 MBps) [2024-11-05T16:12:01.842Z] Copying: 215/1024 [MB] (34 MBps) [2024-11-05T16:12:03.218Z] Copying: 248/1024 [MB] (33 MBps) [2024-11-05T16:12:04.159Z] Copying: 274/1024 [MB] (26 MBps) [2024-11-05T16:12:05.103Z] Copying: 289/1024 [MB] (14 MBps) [2024-11-05T16:12:06.037Z] Copying: 306/1024 [MB] (17 MBps) [2024-11-05T16:12:06.970Z] Copying: 332/1024 [MB] (26 MBps) [2024-11-05T16:12:07.903Z] Copying: 367/1024 [MB] (35 MBps) [2024-11-05T16:12:08.852Z] Copying: 402/1024 [MB] (34 MBps) [2024-11-05T16:12:10.235Z] Copying: 437/1024 [MB] (35 MBps) [2024-11-05T16:12:10.806Z] Copying: 455/1024 [MB] (17 MBps) [2024-11-05T16:12:12.188Z] Copying: 472/1024 [MB] (16 MBps) [2024-11-05T16:12:13.132Z] Copying: 495/1024 [MB] (22 MBps) [2024-11-05T16:12:14.077Z] Copying: 510/1024 [MB] (15 MBps) [2024-11-05T16:12:15.021Z] Copying: 524/1024 [MB] (13 MBps) [2024-11-05T16:12:15.965Z] Copying: 539/1024 [MB] (14 MBps) [2024-11-05T16:12:16.905Z] Copying: 553/1024 [MB] (14 MBps) [2024-11-05T16:12:17.847Z] Copying: 572/1024 [MB] (19 MBps) [2024-11-05T16:12:19.235Z] Copying: 594/1024 [MB] (21 MBps) [2024-11-05T16:12:19.815Z] Copying: 610/1024 [MB] (15 MBps) [2024-11-05T16:12:21.202Z] Copying: 627/1024 [MB] (16 MBps) [2024-11-05T16:12:22.144Z] Copying: 639/1024 [MB] (12 MBps) [2024-11-05T16:12:23.131Z] Copying: 653/1024 [MB] (13 MBps) [2024-11-05T16:12:24.072Z] Copying: 664/1024 [MB] (11 MBps) [2024-11-05T16:12:25.015Z] Copying: 685/1024 [MB] (20 MBps) [2024-11-05T16:12:25.959Z] Copying: 704/1024 [MB] (19 MBps) [2024-11-05T16:12:26.902Z] Copying: 716/1024 [MB] (11 MBps) [2024-11-05T16:12:27.840Z] Copying: 730/1024 [MB] (13 MBps) [2024-11-05T16:12:29.228Z] Copying: 754/1024 [MB] (24 MBps) [2024-11-05T16:12:29.800Z] Copying: 771/1024 [MB] (16 MBps) [2024-11-05T16:12:31.185Z] Copying: 785/1024 [MB] (14 MBps) [2024-11-05T16:12:32.131Z] Copying: 800/1024 [MB] (14 MBps) [2024-11-05T16:12:33.077Z] Copying: 815/1024 [MB] (15 MBps) [2024-11-05T16:12:34.019Z] Copying: 830/1024 [MB] (14 MBps) [2024-11-05T16:12:34.954Z] Copying: 847/1024 [MB] (16 MBps) [2024-11-05T16:12:35.887Z] Copying: 879/1024 [MB] (31 MBps) [2024-11-05T16:12:36.822Z] Copying: 914/1024 [MB] (35 MBps) [2024-11-05T16:12:38.206Z] Copying: 949/1024 [MB] (34 MBps) [2024-11-05T16:12:39.153Z] Copying: 971/1024 [MB] (22 MBps) [2024-11-05T16:12:40.100Z] Copying: 987/1024 [MB] (16 MBps) 
[2024-11-05T16:12:40.667Z] Copying: 1004/1024 [MB] (16 MBps) [2024-11-05T16:12:41.234Z] Copying: 1024/1024 [MB] (average 19 MBps) 00:47:19.872 00:47:19.872 16:12:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:47:19.872 16:12:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:47:20.131 16:12:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:47:20.393 [2024-11-05 16:12:41.547050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:20.393 [2024-11-05 16:12:41.547090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:47:20.393 [2024-11-05 16:12:41.547101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:47:20.393 [2024-11-05 16:12:41.547109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:20.393 [2024-11-05 16:12:41.547128] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:47:20.393 [2024-11-05 16:12:41.549215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:20.393 [2024-11-05 16:12:41.549247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:47:20.393 [2024-11-05 16:12:41.549257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.071 ms 00:47:20.393 [2024-11-05 16:12:41.549263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:20.393 [2024-11-05 16:12:41.551237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:20.393 [2024-11-05 16:12:41.551263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:47:20.393 [2024-11-05 16:12:41.551272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.949 ms 00:47:20.393 [2024-11-05 16:12:41.551278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:20.393 [2024-11-05 16:12:41.564555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:20.393 [2024-11-05 16:12:41.564584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:47:20.393 [2024-11-05 16:12:41.564595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.259 ms 00:47:20.393 [2024-11-05 16:12:41.564601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:20.393 [2024-11-05 16:12:41.569400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:20.393 [2024-11-05 16:12:41.569423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:47:20.393 [2024-11-05 16:12:41.569432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.770 ms 00:47:20.393 [2024-11-05 16:12:41.569439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:20.393 [2024-11-05 16:12:41.588076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:20.393 [2024-11-05 16:12:41.588105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:47:20.393 [2024-11-05 16:12:41.588116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.584 ms 00:47:20.393 [2024-11-05 16:12:41.588122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:20.393 [2024-11-05 16:12:41.600163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:20.393 [2024-11-05 16:12:41.600192] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:47:20.393 [2024-11-05 16:12:41.600203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.007 ms 00:47:20.393 [2024-11-05 16:12:41.600211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:20.393 [2024-11-05 16:12:41.600342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:20.393 [2024-11-05 16:12:41.600351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:47:20.393 [2024-11-05 16:12:41.600359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:47:20.393 [2024-11-05 16:12:41.600365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:20.393 [2024-11-05 16:12:41.618152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:20.393 [2024-11-05 16:12:41.618280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:47:20.393 [2024-11-05 16:12:41.618297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.771 ms 00:47:20.393 [2024-11-05 16:12:41.618303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:20.393 [2024-11-05 16:12:41.636082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:20.393 [2024-11-05 16:12:41.636109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:47:20.393 [2024-11-05 16:12:41.636118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.750 ms 00:47:20.393 [2024-11-05 16:12:41.636124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:20.393 [2024-11-05 16:12:41.653290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:20.393 [2024-11-05 16:12:41.653401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:47:20.393 [2024-11-05 16:12:41.653416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.134 ms 00:47:20.393 [2024-11-05 16:12:41.653421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:20.393 [2024-11-05 16:12:41.670376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:20.393 [2024-11-05 16:12:41.670403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:47:20.393 [2024-11-05 16:12:41.670412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.878 ms 00:47:20.393 [2024-11-05 16:12:41.670418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:20.393 [2024-11-05 16:12:41.670446] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:47:20.393 [2024-11-05 16:12:41.670457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:47:20.393 [2024-11-05 16:12:41.670467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:47:20.393 [2024-11-05 16:12:41.670473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:47:20.393 [2024-11-05 16:12:41.670480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:47:20.393 [2024-11-05 16:12:41.670486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:47:20.393 [2024-11-05 16:12:41.670493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:47:20.393 [2024-11-05 16:12:41.670498] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:47:20.393 [2024-11-05 16:12:41.670507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:47:20.393 [2024-11-05 16:12:41.670513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:47:20.393 [2024-11-05 16:12:41.670521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:47:20.393 [2024-11-05 16:12:41.670526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:47:20.393 [2024-11-05 16:12:41.670534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:47:20.393 [2024-11-05 16:12:41.670539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:47:20.393 [2024-11-05 16:12:41.670546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:47:20.393 [2024-11-05 16:12:41.670551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:47:20.393 [2024-11-05 16:12:41.670558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:47:20.393 [2024-11-05 16:12:41.670564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:47:20.393 [2024-11-05 16:12:41.670571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:47:20.393 [2024-11-05 16:12:41.670576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:47:20.393 [2024-11-05 16:12:41.670583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:47:20.393 [2024-11-05 16:12:41.670589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:47:20.393 [2024-11-05 16:12:41.670597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:47:20.393 [2024-11-05 16:12:41.670603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:47:20.393 [2024-11-05 16:12:41.670611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:47:20.393 [2024-11-05 16:12:41.670617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:47:20.393 [2024-11-05 16:12:41.670624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:47:20.393 [2024-11-05 16:12:41.670630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:47:20.393 [2024-11-05 16:12:41.670637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:47:20.393 [2024-11-05 16:12:41.670643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:47:20.393 [2024-11-05 16:12:41.670650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:47:20.393 [2024-11-05 16:12:41.670656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:47:20.393 [2024-11-05 
16:12:41.670663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:47:20.393 [2024-11-05 16:12:41.670669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:47:20.393 [2024-11-05 16:12:41.670676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:47:20.393 [2024-11-05 16:12:41.670682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:47:20.393 [2024-11-05 16:12:41.670691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:47:20.393 [2024-11-05 16:12:41.670697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:47:20.393 [2024-11-05 16:12:41.670704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:47:20.393 [2024-11-05 16:12:41.670709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:47:20.393 [2024-11-05 16:12:41.670717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:47:20.393 [2024-11-05 16:12:41.670723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:47:20.393 [2024-11-05 16:12:41.670729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:47:20.393 [2024-11-05 16:12:41.670749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:47:20.393 [2024-11-05 16:12:41.670757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:47:20.393 [2024-11-05 16:12:41.670763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:47:20.393 [2024-11-05 16:12:41.670771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:47:20.393 [2024-11-05 16:12:41.670777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:47:20.393 [2024-11-05 16:12:41.670784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:47:20.394 [2024-11-05 16:12:41.670794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:47:20.394 [2024-11-05 16:12:41.670801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:47:20.394 [2024-11-05 16:12:41.670807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:47:20.394 [2024-11-05 16:12:41.670814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:47:20.394 [2024-11-05 16:12:41.670819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:47:20.394 [2024-11-05 16:12:41.670826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:47:20.394 [2024-11-05 16:12:41.670832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:47:20.394 [2024-11-05 16:12:41.670841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 
00:47:20.394 [2024-11-05 16:12:41.670846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:47:20.394 [2024-11-05 16:12:41.670853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:47:20.394 [2024-11-05 16:12:41.670859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:47:20.394 [2024-11-05 16:12:41.670866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:47:20.394 [2024-11-05 16:12:41.670871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:47:20.394 [2024-11-05 16:12:41.670878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:47:20.394 [2024-11-05 16:12:41.670884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:47:20.394 [2024-11-05 16:12:41.670892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:47:20.394 [2024-11-05 16:12:41.670898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:47:20.394 [2024-11-05 16:12:41.670906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:47:20.394 [2024-11-05 16:12:41.670911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:47:20.394 [2024-11-05 16:12:41.670918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:47:20.394 [2024-11-05 16:12:41.670924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:47:20.394 [2024-11-05 16:12:41.670932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:47:20.394 [2024-11-05 16:12:41.670938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:47:20.394 [2024-11-05 16:12:41.670947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:47:20.394 [2024-11-05 16:12:41.670953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:47:20.394 [2024-11-05 16:12:41.670960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:47:20.394 [2024-11-05 16:12:41.670966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:47:20.394 [2024-11-05 16:12:41.670973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:47:20.394 [2024-11-05 16:12:41.670978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:47:20.394 [2024-11-05 16:12:41.670985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:47:20.394 [2024-11-05 16:12:41.670991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:47:20.394 [2024-11-05 16:12:41.670998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:47:20.394 [2024-11-05 16:12:41.671003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 
wr_cnt: 0 state: free 00:47:20.394 [2024-11-05 16:12:41.671010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:47:20.394 [2024-11-05 16:12:41.671016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:47:20.394 [2024-11-05 16:12:41.671023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:47:20.394 [2024-11-05 16:12:41.671029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:47:20.394 [2024-11-05 16:12:41.671036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:47:20.394 [2024-11-05 16:12:41.671041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:47:20.394 [2024-11-05 16:12:41.671050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:47:20.394 [2024-11-05 16:12:41.671055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:47:20.394 [2024-11-05 16:12:41.671062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:47:20.394 [2024-11-05 16:12:41.671068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:47:20.394 [2024-11-05 16:12:41.671075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:47:20.394 [2024-11-05 16:12:41.671081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:47:20.394 [2024-11-05 16:12:41.671088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:47:20.394 [2024-11-05 16:12:41.671094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:47:20.394 [2024-11-05 16:12:41.671101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:47:20.394 [2024-11-05 16:12:41.671107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:47:20.394 [2024-11-05 16:12:41.671115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:47:20.394 [2024-11-05 16:12:41.671129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:47:20.394 [2024-11-05 16:12:41.671137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:47:20.394 [2024-11-05 16:12:41.671149] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:47:20.394 [2024-11-05 16:12:41.671156] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8db061b2-73ec-45fb-89ba-dce50d3beacb 00:47:20.394 [2024-11-05 16:12:41.671162] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:47:20.394 [2024-11-05 16:12:41.671171] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:47:20.394 [2024-11-05 16:12:41.671176] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:47:20.394 [2024-11-05 16:12:41.671185] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:47:20.394 [2024-11-05 16:12:41.671190] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] limits: 00:47:20.394 [2024-11-05 16:12:41.671197] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:47:20.394 [2024-11-05 16:12:41.671203] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:47:20.394 [2024-11-05 16:12:41.671209] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:47:20.394 [2024-11-05 16:12:41.671214] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:47:20.394 [2024-11-05 16:12:41.671221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:20.394 [2024-11-05 16:12:41.671226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:47:20.394 [2024-11-05 16:12:41.671234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.776 ms 00:47:20.394 [2024-11-05 16:12:41.671239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:20.394 [2024-11-05 16:12:41.680819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:20.394 [2024-11-05 16:12:41.680843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:47:20.394 [2024-11-05 16:12:41.680855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.555 ms 00:47:20.394 [2024-11-05 16:12:41.680861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:20.394 [2024-11-05 16:12:41.681137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:20.394 [2024-11-05 16:12:41.681144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:47:20.394 [2024-11-05 16:12:41.681151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:47:20.394 [2024-11-05 16:12:41.681157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:20.394 [2024-11-05 16:12:41.714414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:20.394 [2024-11-05 16:12:41.714446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:47:20.394 [2024-11-05 16:12:41.714456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:20.394 [2024-11-05 16:12:41.714462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:20.394 [2024-11-05 16:12:41.714506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:20.394 [2024-11-05 16:12:41.714513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:47:20.394 [2024-11-05 16:12:41.714520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:20.394 [2024-11-05 16:12:41.714526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:20.394 [2024-11-05 16:12:41.714606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:20.394 [2024-11-05 16:12:41.714614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:47:20.394 [2024-11-05 16:12:41.714623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:20.394 [2024-11-05 16:12:41.714629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:20.394 [2024-11-05 16:12:41.714645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:20.394 [2024-11-05 16:12:41.714650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:47:20.394 [2024-11-05 16:12:41.714657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:20.394 [2024-11-05 
16:12:41.714663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:20.655 [2024-11-05 16:12:41.775714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:20.655 [2024-11-05 16:12:41.775753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:47:20.655 [2024-11-05 16:12:41.775763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:20.655 [2024-11-05 16:12:41.775769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:20.655 [2024-11-05 16:12:41.824689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:20.655 [2024-11-05 16:12:41.824718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:47:20.655 [2024-11-05 16:12:41.824728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:20.655 [2024-11-05 16:12:41.824745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:20.655 [2024-11-05 16:12:41.824802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:20.655 [2024-11-05 16:12:41.824810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:47:20.655 [2024-11-05 16:12:41.824817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:20.655 [2024-11-05 16:12:41.824826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:20.655 [2024-11-05 16:12:41.824873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:20.655 [2024-11-05 16:12:41.824881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:47:20.655 [2024-11-05 16:12:41.824889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:20.655 [2024-11-05 16:12:41.824894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:20.655 [2024-11-05 16:12:41.824962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:20.655 [2024-11-05 16:12:41.824971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:47:20.655 [2024-11-05 16:12:41.824979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:20.655 [2024-11-05 16:12:41.824984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:20.655 [2024-11-05 16:12:41.825011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:20.655 [2024-11-05 16:12:41.825018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:47:20.655 [2024-11-05 16:12:41.825025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:20.655 [2024-11-05 16:12:41.825031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:20.655 [2024-11-05 16:12:41.825060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:20.655 [2024-11-05 16:12:41.825066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:47:20.655 [2024-11-05 16:12:41.825073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:20.655 [2024-11-05 16:12:41.825079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:20.655 [2024-11-05 16:12:41.825114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:20.655 [2024-11-05 16:12:41.825122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:47:20.655 [2024-11-05 16:12:41.825129] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:20.655 [2024-11-05 16:12:41.825134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:20.655 [2024-11-05 16:12:41.825233] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 278.155 ms, result 0 00:47:20.655 true 00:47:20.655 16:12:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 77579 00:47:20.655 16:12:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid77579 00:47:20.655 16:12:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:47:20.655 [2024-11-05 16:12:41.897095] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 00:47:20.655 [2024-11-05 16:12:41.897184] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78377 ] 00:47:20.914 [2024-11-05 16:12:42.044004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:20.914 [2024-11-05 16:12:42.120070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:47:22.290  [2024-11-05T16:12:44.587Z] Copying: 260/1024 [MB] (260 MBps) [2024-11-05T16:12:45.522Z] Copying: 521/1024 [MB] (261 MBps) [2024-11-05T16:12:46.459Z] Copying: 783/1024 [MB] (261 MBps) [2024-11-05T16:12:47.026Z] Copying: 1024/1024 [MB] (average 259 MBps) 00:47:25.664 00:47:25.664 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 77579 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:47:25.664 16:12:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:47:25.664 [2024-11-05 16:12:46.852661] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 
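The unload sequence above completed cleanly (sync on /dev/nbd0, nbd_stop_disk, bdev_ftl_unload ending in 'Set FTL clean state' and 'FTL shutdown ... result 0'). The script then SIGKILLs spdk_tgt (pid 77579), generates a second 1 GiB random file, and the spdk_dd run starting here brings ftl0 up on its own from the saved ftl.json (the load below reports 'SHM: clean 0, shm_clean 0' and a blobstore recovery pass) and writes testfile2 one gigabyte into the device. The size and offset follow from the logged parameters alone; a runnable bash check:

  # arithmetic for --bs=4096 --count=262144 --seek=262144 (values from the command line above)
  bs=4096 count=262144 seek=262144
  echo "$(( count * bs / 1048576 )) MiB written"          # 1024 MiB payload
  echo "$(( seek  * bs / 1048576 )) MiB start offset"     # begins at the 1024 MiB mark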
00:47:25.665 [2024-11-05 16:12:46.852800] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78431 ] 00:47:25.665 [2024-11-05 16:12:47.008684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:25.923 [2024-11-05 16:12:47.084216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:47:26.183 [2024-11-05 16:12:47.290543] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:47:26.183 [2024-11-05 16:12:47.290590] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:47:26.183 [2024-11-05 16:12:47.353312] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:47:26.183 [2024-11-05 16:12:47.353597] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:47:26.183 [2024-11-05 16:12:47.353806] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:47:26.183 [2024-11-05 16:12:47.541469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:26.183 [2024-11-05 16:12:47.541603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:47:26.183 [2024-11-05 16:12:47.541619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:47:26.183 [2024-11-05 16:12:47.541626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:26.183 [2024-11-05 16:12:47.541671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:26.183 [2024-11-05 16:12:47.541679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:47:26.183 [2024-11-05 16:12:47.541685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:47:26.183 [2024-11-05 16:12:47.541691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:26.183 [2024-11-05 16:12:47.541706] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:47:26.183 [2024-11-05 16:12:47.542281] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:47:26.183 [2024-11-05 16:12:47.542295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:26.183 [2024-11-05 16:12:47.542301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:47:26.183 [2024-11-05 16:12:47.542308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.593 ms 00:47:26.183 [2024-11-05 16:12:47.542314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:26.183 [2024-11-05 16:12:47.543274] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:47:26.445 [2024-11-05 16:12:47.553319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:26.445 [2024-11-05 16:12:47.553349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:47:26.445 [2024-11-05 16:12:47.553358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.046 ms 00:47:26.445 [2024-11-05 16:12:47.553364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:26.445 [2024-11-05 16:12:47.553405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:26.445 [2024-11-05 16:12:47.553412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:47:26.445 [2024-11-05 16:12:47.553418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:47:26.445 [2024-11-05 16:12:47.553424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:26.445 [2024-11-05 16:12:47.557886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:26.445 [2024-11-05 16:12:47.557910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:47:26.445 [2024-11-05 16:12:47.557917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.416 ms 00:47:26.445 [2024-11-05 16:12:47.557923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:26.445 [2024-11-05 16:12:47.557976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:26.445 [2024-11-05 16:12:47.557982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:47:26.445 [2024-11-05 16:12:47.557989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:47:26.445 [2024-11-05 16:12:47.557994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:26.445 [2024-11-05 16:12:47.558025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:26.445 [2024-11-05 16:12:47.558034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:47:26.445 [2024-11-05 16:12:47.558041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:47:26.445 [2024-11-05 16:12:47.558046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:26.445 [2024-11-05 16:12:47.558061] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:47:26.445 [2024-11-05 16:12:47.560757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:26.445 [2024-11-05 16:12:47.560779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:47:26.445 [2024-11-05 16:12:47.560786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.699 ms 00:47:26.445 [2024-11-05 16:12:47.560792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:26.445 [2024-11-05 16:12:47.560817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:26.445 [2024-11-05 16:12:47.560823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:47:26.445 [2024-11-05 16:12:47.560829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:47:26.445 [2024-11-05 16:12:47.560835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:26.445 [2024-11-05 16:12:47.560848] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:47:26.445 [2024-11-05 16:12:47.560865] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:47:26.445 [2024-11-05 16:12:47.560891] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:47:26.445 [2024-11-05 16:12:47.560902] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:47:26.445 [2024-11-05 16:12:47.560982] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:47:26.445 [2024-11-05 16:12:47.560990] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:47:26.445 
[2024-11-05 16:12:47.560998] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:47:26.445 [2024-11-05 16:12:47.561005] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:47:26.445 [2024-11-05 16:12:47.561014] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:47:26.445 [2024-11-05 16:12:47.561022] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:47:26.445 [2024-11-05 16:12:47.561028] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:47:26.445 [2024-11-05 16:12:47.561033] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:47:26.445 [2024-11-05 16:12:47.561039] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:47:26.445 [2024-11-05 16:12:47.561045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:26.445 [2024-11-05 16:12:47.561050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:47:26.445 [2024-11-05 16:12:47.561057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:47:26.445 [2024-11-05 16:12:47.561062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:26.445 [2024-11-05 16:12:47.561125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:26.445 [2024-11-05 16:12:47.561134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:47:26.445 [2024-11-05 16:12:47.561140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:47:26.445 [2024-11-05 16:12:47.561146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:26.445 [2024-11-05 16:12:47.561221] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:47:26.445 [2024-11-05 16:12:47.561231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:47:26.445 [2024-11-05 16:12:47.561238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:47:26.445 [2024-11-05 16:12:47.561243] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:47:26.445 [2024-11-05 16:12:47.561250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:47:26.445 [2024-11-05 16:12:47.561256] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:47:26.445 [2024-11-05 16:12:47.561261] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:47:26.445 [2024-11-05 16:12:47.561266] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:47:26.445 [2024-11-05 16:12:47.561272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:47:26.445 [2024-11-05 16:12:47.561277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:47:26.445 [2024-11-05 16:12:47.561285] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:47:26.445 [2024-11-05 16:12:47.561295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:47:26.445 [2024-11-05 16:12:47.561300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:47:26.445 [2024-11-05 16:12:47.561305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:47:26.445 [2024-11-05 16:12:47.561310] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:47:26.445 [2024-11-05 16:12:47.561315] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:47:26.445 [2024-11-05 16:12:47.561320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:47:26.445 [2024-11-05 16:12:47.561325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:47:26.445 [2024-11-05 16:12:47.561330] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:47:26.445 [2024-11-05 16:12:47.561336] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:47:26.445 [2024-11-05 16:12:47.561341] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:47:26.445 [2024-11-05 16:12:47.561346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:47:26.445 [2024-11-05 16:12:47.561351] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:47:26.445 [2024-11-05 16:12:47.561357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:47:26.445 [2024-11-05 16:12:47.561362] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:47:26.445 [2024-11-05 16:12:47.561367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:47:26.445 [2024-11-05 16:12:47.561372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:47:26.445 [2024-11-05 16:12:47.561377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:47:26.445 [2024-11-05 16:12:47.561382] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:47:26.445 [2024-11-05 16:12:47.561388] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:47:26.445 [2024-11-05 16:12:47.561394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:47:26.445 [2024-11-05 16:12:47.561401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:47:26.445 [2024-11-05 16:12:47.561406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:47:26.445 [2024-11-05 16:12:47.561411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:47:26.445 [2024-11-05 16:12:47.561416] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:47:26.445 [2024-11-05 16:12:47.561421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:47:26.445 [2024-11-05 16:12:47.561426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:47:26.445 [2024-11-05 16:12:47.561431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:47:26.445 [2024-11-05 16:12:47.561436] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:47:26.445 [2024-11-05 16:12:47.561440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:47:26.445 [2024-11-05 16:12:47.561445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:47:26.445 [2024-11-05 16:12:47.561450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:47:26.445 [2024-11-05 16:12:47.561456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:47:26.445 [2024-11-05 16:12:47.561462] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:47:26.445 [2024-11-05 16:12:47.561467] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:47:26.445 [2024-11-05 16:12:47.561472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:47:26.445 [2024-11-05 16:12:47.561479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:47:26.445 [2024-11-05 
16:12:47.561489] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:47:26.446 [2024-11-05 16:12:47.561494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:47:26.446 [2024-11-05 16:12:47.561499] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:47:26.446 [2024-11-05 16:12:47.561505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:47:26.446 [2024-11-05 16:12:47.561510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:47:26.446 [2024-11-05 16:12:47.561515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:47:26.446 [2024-11-05 16:12:47.561520] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:47:26.446 [2024-11-05 16:12:47.561527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:47:26.446 [2024-11-05 16:12:47.561533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:47:26.446 [2024-11-05 16:12:47.561539] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:47:26.446 [2024-11-05 16:12:47.561544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:47:26.446 [2024-11-05 16:12:47.561549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:47:26.446 [2024-11-05 16:12:47.561556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:47:26.446 [2024-11-05 16:12:47.561561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:47:26.446 [2024-11-05 16:12:47.561567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:47:26.446 [2024-11-05 16:12:47.561572] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:47:26.446 [2024-11-05 16:12:47.561577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:47:26.446 [2024-11-05 16:12:47.561582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:47:26.446 [2024-11-05 16:12:47.561587] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:47:26.446 [2024-11-05 16:12:47.561592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:47:26.446 [2024-11-05 16:12:47.561598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:47:26.446 [2024-11-05 16:12:47.561604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:47:26.446 [2024-11-05 16:12:47.561610] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:47:26.446 [2024-11-05 16:12:47.561615] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:47:26.446 [2024-11-05 16:12:47.561621] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:47:26.446 [2024-11-05 16:12:47.561627] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:47:26.446 [2024-11-05 16:12:47.561632] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:47:26.446 [2024-11-05 16:12:47.561639] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:47:26.446 [2024-11-05 16:12:47.561645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:26.446 [2024-11-05 16:12:47.561650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:47:26.446 [2024-11-05 16:12:47.561657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.475 ms 00:47:26.446 [2024-11-05 16:12:47.561662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:26.446 [2024-11-05 16:12:47.582659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:26.446 [2024-11-05 16:12:47.582688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:47:26.446 [2024-11-05 16:12:47.582697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.966 ms 00:47:26.446 [2024-11-05 16:12:47.582703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:26.446 [2024-11-05 16:12:47.582782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:26.446 [2024-11-05 16:12:47.582793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:47:26.446 [2024-11-05 16:12:47.582801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:47:26.446 [2024-11-05 16:12:47.582806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:26.446 [2024-11-05 16:12:47.629948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:26.446 [2024-11-05 16:12:47.629994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:47:26.446 [2024-11-05 16:12:47.630006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.100 ms 00:47:26.446 [2024-11-05 16:12:47.630017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:26.446 [2024-11-05 16:12:47.630068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:26.446 [2024-11-05 16:12:47.630078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:47:26.446 [2024-11-05 16:12:47.630086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:47:26.446 [2024-11-05 16:12:47.630094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:26.446 [2024-11-05 16:12:47.630489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:26.446 [2024-11-05 16:12:47.630507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:47:26.446 [2024-11-05 16:12:47.630516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.334 ms 00:47:26.446 [2024-11-05 16:12:47.630524] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:26.446 [2024-11-05 16:12:47.630655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:26.446 [2024-11-05 16:12:47.630663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:47:26.446 [2024-11-05 16:12:47.630671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:47:26.446 [2024-11-05 16:12:47.630678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:26.446 [2024-11-05 16:12:47.644096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:26.446 [2024-11-05 16:12:47.644232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:47:26.446 [2024-11-05 16:12:47.644250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.397 ms 00:47:26.446 [2024-11-05 16:12:47.644258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:26.446 [2024-11-05 16:12:47.657511] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:47:26.446 [2024-11-05 16:12:47.657545] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:47:26.446 [2024-11-05 16:12:47.657557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:26.446 [2024-11-05 16:12:47.657566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:47:26.446 [2024-11-05 16:12:47.657575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.179 ms 00:47:26.446 [2024-11-05 16:12:47.657583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:26.446 [2024-11-05 16:12:47.682987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:26.446 [2024-11-05 16:12:47.683127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:47:26.446 [2024-11-05 16:12:47.683192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.361 ms 00:47:26.446 [2024-11-05 16:12:47.683215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:26.446 [2024-11-05 16:12:47.695576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:26.446 [2024-11-05 16:12:47.695710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:47:26.446 [2024-11-05 16:12:47.695780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.236 ms 00:47:26.446 [2024-11-05 16:12:47.695805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:26.446 [2024-11-05 16:12:47.707209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:26.446 [2024-11-05 16:12:47.707327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:47:26.446 [2024-11-05 16:12:47.707378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.324 ms 00:47:26.446 [2024-11-05 16:12:47.707400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:26.446 [2024-11-05 16:12:47.708814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:26.446 [2024-11-05 16:12:47.708955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:47:26.446 [2024-11-05 16:12:47.709018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.665 ms 00:47:26.446 [2024-11-05 16:12:47.709042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
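Every management step in this log is reported through the same four-record pattern from mngt/ftl_mngt.c: an "Action" marker, the step "name", the measured "duration", and the step's "status" code. A minimal C sketch of that timed-step pattern follows; it is an illustration only, assuming hypothetical names (struct ftl_step, run_steps, noop_step) rather than SPDK's actual internal types or API.

/*
 * Minimal sketch of the Action/name/duration/status trace pattern seen in
 * the mngt/ftl_mngt.c records above. All names here (struct ftl_step,
 * run_steps, noop_step) are illustrative assumptions, not SPDK's real API.
 */
#include <stdio.h>
#include <stddef.h>
#include <time.h>

struct ftl_step {
    const char *name;     /* printed as the "name: ..." record */
    int (*action)(void);  /* returns the step's status code */
};

static double elapsed_ms(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) * 1e3 + (b.tv_nsec - a.tv_nsec) / 1e6;
}

static int run_steps(const char *dev, const struct ftl_step *steps, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        int status = steps[i].action();
        clock_gettime(CLOCK_MONOTONIC, &t1);
        /* Mirrors the four NOTICE records emitted per step in the log. */
        printf("[FTL][%s] Action\n", dev);
        printf("[FTL][%s] name:     %s\n", dev, steps[i].name);
        printf("[FTL][%s] duration: %.3f ms\n", dev, elapsed_ms(t0, t1));
        printf("[FTL][%s] status:   %d\n", dev, status);
        if (status != 0)
            return status;  /* a failing step aborts the sequence */
    }
    return 0;
}

static int noop_step(void) { return 0; }

int main(void)
{
    /* Sample sequence echoing two step names visible in this log. */
    const struct ftl_step startup[] = {
        { "Check configuration", noop_step },
        { "Open base bdev",      noop_step },
    };
    return run_steps("ftl0", startup, sizeof(startup) / sizeof(startup[0]));
}

On a failed or dirty startup the real pipeline also unwinds completed steps in reverse, which is what the later "Rollback" records in this log (during 'FTL shutdown') correspond to; the sketch above omits that path.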
00:47:26.446 [2024-11-05 16:12:47.766501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:26.446 [2024-11-05 16:12:47.766718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:47:26.446 [2024-11-05 16:12:47.766936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.425 ms 00:47:26.446 [2024-11-05 16:12:47.766976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:26.446 [2024-11-05 16:12:47.777923] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:47:26.446 [2024-11-05 16:12:47.780760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:26.446 [2024-11-05 16:12:47.780862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:47:26.446 [2024-11-05 16:12:47.780912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.721 ms 00:47:26.446 [2024-11-05 16:12:47.780934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:26.446 [2024-11-05 16:12:47.781058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:26.446 [2024-11-05 16:12:47.781085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:47:26.446 [2024-11-05 16:12:47.781106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:47:26.446 [2024-11-05 16:12:47.781124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:26.446 [2024-11-05 16:12:47.781208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:26.446 [2024-11-05 16:12:47.781324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:47:26.446 [2024-11-05 16:12:47.781344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:47:26.446 [2024-11-05 16:12:47.781364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:26.446 [2024-11-05 16:12:47.781397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:26.446 [2024-11-05 16:12:47.781422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:47:26.446 [2024-11-05 16:12:47.781499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:47:26.446 [2024-11-05 16:12:47.781523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:26.446 [2024-11-05 16:12:47.781572] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:47:26.447 [2024-11-05 16:12:47.781596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:26.447 [2024-11-05 16:12:47.781615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:47:26.447 [2024-11-05 16:12:47.781634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:47:26.447 [2024-11-05 16:12:47.781725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:26.447 [2024-11-05 16:12:47.805359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:26.447 [2024-11-05 16:12:47.805480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:47:26.447 [2024-11-05 16:12:47.805537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.576 ms 00:47:26.447 [2024-11-05 16:12:47.805560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:26.447 [2024-11-05 16:12:47.805710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:26.447 [2024-11-05 
16:12:47.805887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:47:26.709 [2024-11-05 16:12:47.805961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:47:26.709 [2024-11-05 16:12:47.805984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:26.710 [2024-11-05 16:12:47.807013] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 265.077 ms, result 0 00:47:27.656  [2024-11-05T16:12:49.966Z] Copying: 13/1024 [MB] (13 MBps) [2024-11-05T16:12:50.912Z] Copying: 31/1024 [MB] (18 MBps) [2024-11-05T16:12:51.859Z] Copying: 51/1024 [MB] (19 MBps) [2024-11-05T16:12:53.247Z] Copying: 77/1024 [MB] (25 MBps) [2024-11-05T16:12:53.820Z] Copying: 96/1024 [MB] (19 MBps) [2024-11-05T16:12:55.209Z] Copying: 113/1024 [MB] (17 MBps) [2024-11-05T16:12:56.156Z] Copying: 126/1024 [MB] (12 MBps) [2024-11-05T16:12:57.101Z] Copying: 148/1024 [MB] (21 MBps) [2024-11-05T16:12:58.047Z] Copying: 167/1024 [MB] (19 MBps) [2024-11-05T16:12:58.993Z] Copying: 190/1024 [MB] (22 MBps) [2024-11-05T16:12:59.937Z] Copying: 207/1024 [MB] (17 MBps) [2024-11-05T16:13:00.880Z] Copying: 218/1024 [MB] (11 MBps) [2024-11-05T16:13:01.827Z] Copying: 233/1024 [MB] (14 MBps) [2024-11-05T16:13:03.222Z] Copying: 245/1024 [MB] (11 MBps) [2024-11-05T16:13:04.169Z] Copying: 264/1024 [MB] (18 MBps) [2024-11-05T16:13:05.118Z] Copying: 282/1024 [MB] (18 MBps) [2024-11-05T16:13:06.061Z] Copying: 301/1024 [MB] (18 MBps) [2024-11-05T16:13:07.005Z] Copying: 318/1024 [MB] (17 MBps) [2024-11-05T16:13:07.953Z] Copying: 332/1024 [MB] (13 MBps) [2024-11-05T16:13:08.901Z] Copying: 344/1024 [MB] (12 MBps) [2024-11-05T16:13:09.847Z] Copying: 359/1024 [MB] (15 MBps) [2024-11-05T16:13:11.235Z] Copying: 370/1024 [MB] (11 MBps) [2024-11-05T16:13:12.181Z] Copying: 385/1024 [MB] (14 MBps) [2024-11-05T16:13:13.127Z] Copying: 396/1024 [MB] (11 MBps) [2024-11-05T16:13:14.071Z] Copying: 411/1024 [MB] (14 MBps) [2024-11-05T16:13:15.016Z] Copying: 425/1024 [MB] (14 MBps) [2024-11-05T16:13:15.959Z] Copying: 443/1024 [MB] (17 MBps) [2024-11-05T16:13:16.900Z] Copying: 454/1024 [MB] (11 MBps) [2024-11-05T16:13:17.835Z] Copying: 472/1024 [MB] (17 MBps) [2024-11-05T16:13:19.281Z] Copying: 494/1024 [MB] (22 MBps) [2024-11-05T16:13:19.853Z] Copying: 517/1024 [MB] (22 MBps) [2024-11-05T16:13:21.242Z] Copying: 529/1024 [MB] (12 MBps) [2024-11-05T16:13:22.187Z] Copying: 541/1024 [MB] (11 MBps) [2024-11-05T16:13:23.132Z] Copying: 555/1024 [MB] (14 MBps) [2024-11-05T16:13:24.077Z] Copying: 565/1024 [MB] (10 MBps) [2024-11-05T16:13:25.023Z] Copying: 576/1024 [MB] (10 MBps) [2024-11-05T16:13:25.964Z] Copying: 586/1024 [MB] (10 MBps) [2024-11-05T16:13:26.898Z] Copying: 596/1024 [MB] (10 MBps) [2024-11-05T16:13:27.830Z] Copying: 619/1024 [MB] (22 MBps) [2024-11-05T16:13:29.201Z] Copying: 652/1024 [MB] (33 MBps) [2024-11-05T16:13:30.155Z] Copying: 673/1024 [MB] (20 MBps) [2024-11-05T16:13:31.103Z] Copying: 695/1024 [MB] (22 MBps) [2024-11-05T16:13:32.046Z] Copying: 705/1024 [MB] (10 MBps) [2024-11-05T16:13:32.986Z] Copying: 718/1024 [MB] (12 MBps) [2024-11-05T16:13:33.925Z] Copying: 746/1024 [MB] (28 MBps) [2024-11-05T16:13:34.866Z] Copying: 757/1024 [MB] (11 MBps) [2024-11-05T16:13:36.253Z] Copying: 777/1024 [MB] (19 MBps) [2024-11-05T16:13:36.824Z] Copying: 793/1024 [MB] (16 MBps) [2024-11-05T16:13:38.209Z] Copying: 813/1024 [MB] (19 MBps) [2024-11-05T16:13:39.152Z] Copying: 828/1024 [MB] (15 MBps) [2024-11-05T16:13:40.096Z] Copying: 
839/1024 [MB] (11 MBps) [2024-11-05T16:13:41.040Z] Copying: 855/1024 [MB] (16 MBps) [2024-11-05T16:13:41.985Z] Copying: 866/1024 [MB] (10 MBps) [2024-11-05T16:13:42.928Z] Copying: 884/1024 [MB] (18 MBps) [2024-11-05T16:13:43.864Z] Copying: 894/1024 [MB] (10 MBps) [2024-11-05T16:13:45.252Z] Copying: 930/1024 [MB] (35 MBps) [2024-11-05T16:13:45.825Z] Copying: 940/1024 [MB] (10 MBps) [2024-11-05T16:13:47.204Z] Copying: 950/1024 [MB] (10 MBps) [2024-11-05T16:13:47.822Z] Copying: 972/1024 [MB] (21 MBps) [2024-11-05T16:13:49.207Z] Copying: 1005/1024 [MB] (33 MBps) [2024-11-05T16:13:50.152Z] Copying: 1021/1024 [MB] (16 MBps) [2024-11-05T16:13:50.152Z] Copying: 1048560/1048576 [kB] (2088 kBps) [2024-11-05T16:13:50.152Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-11-05 16:13:49.852086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:28.790 [2024-11-05 16:13:49.852163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:48:28.790 [2024-11-05 16:13:49.852181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:48:28.790 [2024-11-05 16:13:49.852191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:28.790 [2024-11-05 16:13:49.853790] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:48:28.790 [2024-11-05 16:13:49.859382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:28.790 [2024-11-05 16:13:49.859432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:48:28.790 [2024-11-05 16:13:49.859446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.545 ms 00:48:28.790 [2024-11-05 16:13:49.859455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:28.790 [2024-11-05 16:13:49.872497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:28.790 [2024-11-05 16:13:49.872546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:48:28.790 [2024-11-05 16:13:49.872559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.401 ms 00:48:28.790 [2024-11-05 16:13:49.872569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:28.790 [2024-11-05 16:13:49.896776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:28.790 [2024-11-05 16:13:49.896844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:48:28.790 [2024-11-05 16:13:49.896857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.189 ms 00:48:28.790 [2024-11-05 16:13:49.896866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:28.790 [2024-11-05 16:13:49.903059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:28.790 [2024-11-05 16:13:49.903105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:48:28.790 [2024-11-05 16:13:49.903117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.154 ms 00:48:28.790 [2024-11-05 16:13:49.903125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:28.790 [2024-11-05 16:13:49.929995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:28.790 [2024-11-05 16:13:49.930043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:48:28.790 [2024-11-05 16:13:49.930056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.824 ms 00:48:28.790 [2024-11-05 16:13:49.930064] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:28.790 [2024-11-05 16:13:49.946211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:28.790 [2024-11-05 16:13:49.946264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:48:28.790 [2024-11-05 16:13:49.946277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.099 ms 00:48:28.790 [2024-11-05 16:13:49.946286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:28.790 [2024-11-05 16:13:50.103692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:28.790 [2024-11-05 16:13:50.103779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:48:28.790 [2024-11-05 16:13:50.103793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 157.355 ms 00:48:28.790 [2024-11-05 16:13:50.103809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:28.790 [2024-11-05 16:13:50.129384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:28.790 [2024-11-05 16:13:50.129429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:48:28.790 [2024-11-05 16:13:50.129443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.558 ms 00:48:28.790 [2024-11-05 16:13:50.129451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:29.051 [2024-11-05 16:13:50.154973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:29.051 [2024-11-05 16:13:50.155187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:48:29.051 [2024-11-05 16:13:50.155218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.477 ms 00:48:29.051 [2024-11-05 16:13:50.155227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:29.051 [2024-11-05 16:13:50.179804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:29.051 [2024-11-05 16:13:50.179852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:48:29.051 [2024-11-05 16:13:50.179865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.497 ms 00:48:29.051 [2024-11-05 16:13:50.179871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:29.051 [2024-11-05 16:13:50.204555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:29.051 [2024-11-05 16:13:50.204601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:48:29.051 [2024-11-05 16:13:50.204613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.611 ms 00:48:29.051 [2024-11-05 16:13:50.204620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:29.051 [2024-11-05 16:13:50.204663] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:48:29.051 [2024-11-05 16:13:50.204679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 100352 / 261120 wr_cnt: 1 state: open 00:48:29.051 [2024-11-05 16:13:50.204690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:48:29.051 [2024-11-05 16:13:50.204698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:48:29.051 [2024-11-05 16:13:50.204707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:48:29.051 [2024-11-05 16:13:50.204715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:48:29.051 [2024-11-05 16:13:50.204723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:48:29.051 [2024-11-05 16:13:50.204731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:48:29.051 [2024-11-05 16:13:50.204948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:48:29.051 [2024-11-05 16:13:50.204971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:48:29.051 [2024-11-05 16:13:50.204980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:48:29.051 [2024-11-05 16:13:50.204988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:48:29.051 [2024-11-05 16:13:50.204998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:48:29.051 [2024-11-05 16:13:50.205006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:48:29.051 [2024-11-05 16:13:50.205015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:48:29.051 [2024-11-05 16:13:50.205022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:48:29.051 [2024-11-05 16:13:50.205030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:48:29.051 [2024-11-05 16:13:50.205038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:48:29.051 [2024-11-05 16:13:50.205047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:48:29.051 [2024-11-05 16:13:50.205055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:48:29.051 [2024-11-05 16:13:50.205063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:48:29.051 [2024-11-05 16:13:50.205070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:48:29.051 [2024-11-05 16:13:50.205078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:48:29.051 [2024-11-05 16:13:50.205085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:48:29.051 [2024-11-05 16:13:50.205093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:48:29.051 [2024-11-05 16:13:50.205101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:48:29.051 [2024-11-05 16:13:50.205110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:48:29.051 [2024-11-05 16:13:50.205118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:48:29.051 [2024-11-05 16:13:50.205125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:48:29.051 [2024-11-05 16:13:50.205133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:48:29.051 [2024-11-05 16:13:50.205140] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:48:29.051 [2024-11-05 16:13:50.205148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:48:29.051 [2024-11-05 16:13:50.205155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:48:29.051 [2024-11-05 16:13:50.205166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:48:29.051 [2024-11-05 16:13:50.205174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:48:29.051 [2024-11-05 16:13:50.205181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:48:29.051 [2024-11-05 16:13:50.205189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:48:29.051 [2024-11-05 16:13:50.205196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:48:29.051 [2024-11-05 16:13:50.205204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:48:29.051 [2024-11-05 16:13:50.205213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:48:29.051 [2024-11-05 16:13:50.205221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:48:29.051 [2024-11-05 16:13:50.205229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:48:29.051 [2024-11-05 16:13:50.205237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:48:29.051 [2024-11-05 16:13:50.205244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:48:29.051 [2024-11-05 16:13:50.205253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:48:29.051 [2024-11-05 16:13:50.205260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:48:29.051 [2024-11-05 16:13:50.205268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:48:29.051 [2024-11-05 16:13:50.205276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:48:29.051 [2024-11-05 16:13:50.205284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:48:29.051 [2024-11-05 16:13:50.205292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:48:29.051 [2024-11-05 16:13:50.205299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:48:29.051 [2024-11-05 16:13:50.205307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:48:29.051 [2024-11-05 16:13:50.205315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:48:29.051 [2024-11-05 16:13:50.205323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:48:29.051 [2024-11-05 16:13:50.205331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:48:29.051 [2024-11-05 
16:13:50.205339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:48:29.051 [2024-11-05 16:13:50.205347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:48:29.051 [2024-11-05 16:13:50.205355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:48:29.051 [2024-11-05 16:13:50.205362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:48:29.051 [2024-11-05 16:13:50.205370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:48:29.051 [2024-11-05 16:13:50.205377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:48:29.051 [2024-11-05 16:13:50.205384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:48:29.051 [2024-11-05 16:13:50.205392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:48:29.051 [2024-11-05 16:13:50.205399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:48:29.051 [2024-11-05 16:13:50.205406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:48:29.052 [2024-11-05 16:13:50.205415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:48:29.052 [2024-11-05 16:13:50.205423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:48:29.052 [2024-11-05 16:13:50.205431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:48:29.052 [2024-11-05 16:13:50.205440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:48:29.052 [2024-11-05 16:13:50.205448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:48:29.052 [2024-11-05 16:13:50.205455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:48:29.052 [2024-11-05 16:13:50.205463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:48:29.052 [2024-11-05 16:13:50.205470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:48:29.052 [2024-11-05 16:13:50.205477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:48:29.052 [2024-11-05 16:13:50.205485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:48:29.052 [2024-11-05 16:13:50.205493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:48:29.052 [2024-11-05 16:13:50.205500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:48:29.052 [2024-11-05 16:13:50.205508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:48:29.052 [2024-11-05 16:13:50.205515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:48:29.052 [2024-11-05 16:13:50.205522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 
00:48:29.052 [2024-11-05 16:13:50.205530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:48:29.052 [2024-11-05 16:13:50.205538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:48:29.052 [2024-11-05 16:13:50.205545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:48:29.052 [2024-11-05 16:13:50.205552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:48:29.052 [2024-11-05 16:13:50.205560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:48:29.052 [2024-11-05 16:13:50.205568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:48:29.052 [2024-11-05 16:13:50.205576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:48:29.052 [2024-11-05 16:13:50.205584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:48:29.052 [2024-11-05 16:13:50.205591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:48:29.052 [2024-11-05 16:13:50.205599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:48:29.052 [2024-11-05 16:13:50.205607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:48:29.052 [2024-11-05 16:13:50.205614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:48:29.052 [2024-11-05 16:13:50.205622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:48:29.052 [2024-11-05 16:13:50.205630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:48:29.052 [2024-11-05 16:13:50.205637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:48:29.052 [2024-11-05 16:13:50.205644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:48:29.052 [2024-11-05 16:13:50.205652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:48:29.052 [2024-11-05 16:13:50.205663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:48:29.052 [2024-11-05 16:13:50.205672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:48:29.052 [2024-11-05 16:13:50.205679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:48:29.052 [2024-11-05 16:13:50.205688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:48:29.052 [2024-11-05 16:13:50.205705] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:48:29.052 [2024-11-05 16:13:50.205714] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8db061b2-73ec-45fb-89ba-dce50d3beacb 00:48:29.052 [2024-11-05 16:13:50.205723] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 100352 00:48:29.052 [2024-11-05 16:13:50.205752] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 101312 00:48:29.052 [2024-11-05 16:13:50.205767] 
ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 100352 00:48:29.052 [2024-11-05 16:13:50.205777] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0096 00:48:29.052 [2024-11-05 16:13:50.205785] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:48:29.052 [2024-11-05 16:13:50.205794] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:48:29.052 [2024-11-05 16:13:50.205802] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:48:29.052 [2024-11-05 16:13:50.205809] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:48:29.052 [2024-11-05 16:13:50.205816] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:48:29.052 [2024-11-05 16:13:50.205824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:29.052 [2024-11-05 16:13:50.205833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:48:29.052 [2024-11-05 16:13:50.205841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.162 ms 00:48:29.052 [2024-11-05 16:13:50.205848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:29.052 [2024-11-05 16:13:50.219502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:29.052 [2024-11-05 16:13:50.219656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:48:29.052 [2024-11-05 16:13:50.219711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.628 ms 00:48:29.052 [2024-11-05 16:13:50.219755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:29.052 [2024-11-05 16:13:50.220167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:29.052 [2024-11-05 16:13:50.220203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:48:29.052 [2024-11-05 16:13:50.220303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.359 ms 00:48:29.052 [2024-11-05 16:13:50.220327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:29.052 [2024-11-05 16:13:50.256680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:29.052 [2024-11-05 16:13:50.256874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:48:29.052 [2024-11-05 16:13:50.256943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:29.052 [2024-11-05 16:13:50.256968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:29.052 [2024-11-05 16:13:50.257047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:29.052 [2024-11-05 16:13:50.257072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:48:29.052 [2024-11-05 16:13:50.257092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:29.052 [2024-11-05 16:13:50.257111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:29.052 [2024-11-05 16:13:50.257194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:29.052 [2024-11-05 16:13:50.257271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:48:29.052 [2024-11-05 16:13:50.257297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:29.052 [2024-11-05 16:13:50.257316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:29.052 [2024-11-05 16:13:50.257345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:48:29.052 [2024-11-05 16:13:50.257366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:48:29.052 [2024-11-05 16:13:50.257385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:29.052 [2024-11-05 16:13:50.257404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:29.052 [2024-11-05 16:13:50.342837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:29.052 [2024-11-05 16:13:50.343066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:48:29.052 [2024-11-05 16:13:50.343128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:29.052 [2024-11-05 16:13:50.343151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:29.052 [2024-11-05 16:13:50.412589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:29.052 [2024-11-05 16:13:50.412816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:48:29.311 [2024-11-05 16:13:50.412937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:29.312 [2024-11-05 16:13:50.412968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:29.312 [2024-11-05 16:13:50.413074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:29.312 [2024-11-05 16:13:50.413098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:48:29.312 [2024-11-05 16:13:50.413119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:29.312 [2024-11-05 16:13:50.413139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:29.312 [2024-11-05 16:13:50.413187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:29.312 [2024-11-05 16:13:50.413263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:48:29.312 [2024-11-05 16:13:50.413276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:29.312 [2024-11-05 16:13:50.413285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:29.312 [2024-11-05 16:13:50.413398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:29.312 [2024-11-05 16:13:50.413416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:48:29.312 [2024-11-05 16:13:50.413425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:29.312 [2024-11-05 16:13:50.413433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:29.312 [2024-11-05 16:13:50.413467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:29.312 [2024-11-05 16:13:50.413478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:48:29.312 [2024-11-05 16:13:50.413486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:29.312 [2024-11-05 16:13:50.413494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:29.312 [2024-11-05 16:13:50.413536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:29.312 [2024-11-05 16:13:50.413550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:48:29.312 [2024-11-05 16:13:50.413559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:29.312 [2024-11-05 16:13:50.413567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:29.312 [2024-11-05 
16:13:50.413615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:29.312 [2024-11-05 16:13:50.413627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:48:29.312 [2024-11-05 16:13:50.413636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:29.312 [2024-11-05 16:13:50.413644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:29.312 [2024-11-05 16:13:50.413810] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 562.137 ms, result 0 00:48:30.694 00:48:30.694 00:48:30.694 16:13:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:48:32.599 16:13:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:48:32.599 [2024-11-05 16:13:53.824594] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 00:48:32.599 [2024-11-05 16:13:53.824690] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79112 ] 00:48:32.860 [2024-11-05 16:13:53.980020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:32.860 [2024-11-05 16:13:54.093022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:48:33.121 [2024-11-05 16:13:54.383583] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:48:33.121 [2024-11-05 16:13:54.383673] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:48:33.383 [2024-11-05 16:13:54.545954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:33.383 [2024-11-05 16:13:54.546017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:48:33.383 [2024-11-05 16:13:54.546039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:48:33.383 [2024-11-05 16:13:54.546048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:33.383 [2024-11-05 16:13:54.546106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:33.383 [2024-11-05 16:13:54.546118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:48:33.383 [2024-11-05 16:13:54.546129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:48:33.383 [2024-11-05 16:13:54.546137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:33.383 [2024-11-05 16:13:54.546158] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:48:33.383 [2024-11-05 16:13:54.546901] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:48:33.383 [2024-11-05 16:13:54.546922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:33.383 [2024-11-05 16:13:54.546931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:48:33.383 [2024-11-05 16:13:54.546940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.769 ms 00:48:33.383 [2024-11-05 16:13:54.546948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:48:33.383 [2024-11-05 16:13:54.548699] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:48:33.383 [2024-11-05 16:13:54.563165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:33.383 [2024-11-05 16:13:54.563217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:48:33.383 [2024-11-05 16:13:54.563230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.468 ms 00:48:33.383 [2024-11-05 16:13:54.563239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:33.383 [2024-11-05 16:13:54.563322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:33.383 [2024-11-05 16:13:54.563332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:48:33.383 [2024-11-05 16:13:54.563341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:48:33.383 [2024-11-05 16:13:54.563349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:33.383 [2024-11-05 16:13:54.571886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:33.383 [2024-11-05 16:13:54.571927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:48:33.383 [2024-11-05 16:13:54.571938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.457 ms 00:48:33.383 [2024-11-05 16:13:54.571947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:33.383 [2024-11-05 16:13:54.572034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:33.383 [2024-11-05 16:13:54.572043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:48:33.383 [2024-11-05 16:13:54.572052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:48:33.383 [2024-11-05 16:13:54.572061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:33.383 [2024-11-05 16:13:54.572106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:33.383 [2024-11-05 16:13:54.572115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:48:33.383 [2024-11-05 16:13:54.572124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:48:33.383 [2024-11-05 16:13:54.572132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:33.383 [2024-11-05 16:13:54.572156] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:48:33.383 [2024-11-05 16:13:54.576244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:33.383 [2024-11-05 16:13:54.576285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:48:33.383 [2024-11-05 16:13:54.576297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.093 ms 00:48:33.383 [2024-11-05 16:13:54.576308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:33.383 [2024-11-05 16:13:54.576343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:33.383 [2024-11-05 16:13:54.576352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:48:33.383 [2024-11-05 16:13:54.576361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:48:33.383 [2024-11-05 16:13:54.576369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:33.383 [2024-11-05 16:13:54.576420] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 
0 00:48:33.383 [2024-11-05 16:13:54.576445] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:48:33.383 [2024-11-05 16:13:54.576482] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:48:33.383 [2024-11-05 16:13:54.576502] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:48:33.383 [2024-11-05 16:13:54.576607] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:48:33.383 [2024-11-05 16:13:54.576620] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:48:33.383 [2024-11-05 16:13:54.576631] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:48:33.383 [2024-11-05 16:13:54.576643] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:48:33.383 [2024-11-05 16:13:54.576652] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:48:33.383 [2024-11-05 16:13:54.576660] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:48:33.383 [2024-11-05 16:13:54.576668] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:48:33.383 [2024-11-05 16:13:54.576677] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:48:33.383 [2024-11-05 16:13:54.576685] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:48:33.383 [2024-11-05 16:13:54.576696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:33.383 [2024-11-05 16:13:54.576703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:48:33.383 [2024-11-05 16:13:54.576712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:48:33.383 [2024-11-05 16:13:54.576720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:33.383 [2024-11-05 16:13:54.576828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:33.383 [2024-11-05 16:13:54.576838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:48:33.383 [2024-11-05 16:13:54.576847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:48:33.383 [2024-11-05 16:13:54.576857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:33.383 [2024-11-05 16:13:54.576963] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:48:33.383 [2024-11-05 16:13:54.576978] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:48:33.383 [2024-11-05 16:13:54.576988] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:48:33.383 [2024-11-05 16:13:54.576996] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:48:33.383 [2024-11-05 16:13:54.577005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:48:33.383 [2024-11-05 16:13:54.577012] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:48:33.383 [2024-11-05 16:13:54.577019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:48:33.383 [2024-11-05 16:13:54.577026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:48:33.383 [2024-11-05 16:13:54.577033] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:48:33.383 [2024-11-05 16:13:54.577040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:48:33.383 [2024-11-05 16:13:54.577046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:48:33.383 [2024-11-05 16:13:54.577053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:48:33.383 [2024-11-05 16:13:54.577059] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:48:33.383 [2024-11-05 16:13:54.577066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:48:33.383 [2024-11-05 16:13:54.577074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:48:33.384 [2024-11-05 16:13:54.577088] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:48:33.384 [2024-11-05 16:13:54.577095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:48:33.384 [2024-11-05 16:13:54.577102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:48:33.384 [2024-11-05 16:13:54.577108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:48:33.384 [2024-11-05 16:13:54.577115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:48:33.384 [2024-11-05 16:13:54.577122] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:48:33.384 [2024-11-05 16:13:54.577129] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:48:33.384 [2024-11-05 16:13:54.577136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:48:33.384 [2024-11-05 16:13:54.577143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:48:33.384 [2024-11-05 16:13:54.577149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:48:33.384 [2024-11-05 16:13:54.577156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:48:33.384 [2024-11-05 16:13:54.577163] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:48:33.384 [2024-11-05 16:13:54.577170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:48:33.384 [2024-11-05 16:13:54.577176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:48:33.384 [2024-11-05 16:13:54.577182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:48:33.384 [2024-11-05 16:13:54.577192] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:48:33.384 [2024-11-05 16:13:54.577199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:48:33.384 [2024-11-05 16:13:54.577206] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:48:33.384 [2024-11-05 16:13:54.577214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:48:33.384 [2024-11-05 16:13:54.577221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:48:33.384 [2024-11-05 16:13:54.577229] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:48:33.384 [2024-11-05 16:13:54.577236] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:48:33.384 [2024-11-05 16:13:54.577243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:48:33.384 [2024-11-05 16:13:54.577250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:48:33.384 [2024-11-05 16:13:54.577257] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:48:33.384 [2024-11-05 
16:13:54.577264] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:48:33.384 [2024-11-05 16:13:54.577270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:48:33.384 [2024-11-05 16:13:54.577277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:48:33.384 [2024-11-05 16:13:54.577283] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:48:33.384 [2024-11-05 16:13:54.577292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:48:33.384 [2024-11-05 16:13:54.577300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:48:33.384 [2024-11-05 16:13:54.577313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:48:33.384 [2024-11-05 16:13:54.577324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:48:33.384 [2024-11-05 16:13:54.577331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:48:33.384 [2024-11-05 16:13:54.577338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:48:33.384 [2024-11-05 16:13:54.577345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:48:33.384 [2024-11-05 16:13:54.577353] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:48:33.384 [2024-11-05 16:13:54.577360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:48:33.384 [2024-11-05 16:13:54.577369] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:48:33.384 [2024-11-05 16:13:54.577379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:48:33.384 [2024-11-05 16:13:54.577389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:48:33.384 [2024-11-05 16:13:54.577398] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:48:33.384 [2024-11-05 16:13:54.577405] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:48:33.384 [2024-11-05 16:13:54.577414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:48:33.384 [2024-11-05 16:13:54.577422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:48:33.384 [2024-11-05 16:13:54.577430] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:48:33.384 [2024-11-05 16:13:54.577437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:48:33.384 [2024-11-05 16:13:54.577446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:48:33.384 [2024-11-05 16:13:54.577454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:48:33.384 [2024-11-05 16:13:54.577464] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:48:33.384 [2024-11-05 16:13:54.577471] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:48:33.384 [2024-11-05 16:13:54.577480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:48:33.384 [2024-11-05 16:13:54.577487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:48:33.384 [2024-11-05 16:13:54.577496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:48:33.384 [2024-11-05 16:13:54.577503] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:48:33.384 [2024-11-05 16:13:54.577516] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:48:33.384 [2024-11-05 16:13:54.577525] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:48:33.384 [2024-11-05 16:13:54.577533] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:48:33.384 [2024-11-05 16:13:54.577540] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:48:33.384 [2024-11-05 16:13:54.577547] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:48:33.384 [2024-11-05 16:13:54.577555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:33.384 [2024-11-05 16:13:54.577563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:48:33.384 [2024-11-05 16:13:54.577571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.660 ms 00:48:33.384 [2024-11-05 16:13:54.577584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:33.384 [2024-11-05 16:13:54.610143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:33.384 [2024-11-05 16:13:54.610421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:48:33.384 [2024-11-05 16:13:54.610444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.511 ms 00:48:33.384 [2024-11-05 16:13:54.610453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:33.384 [2024-11-05 16:13:54.610557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:33.384 [2024-11-05 16:13:54.610566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:48:33.384 [2024-11-05 16:13:54.610577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:48:33.384 [2024-11-05 16:13:54.610585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:33.384 [2024-11-05 16:13:54.657578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:33.384 [2024-11-05 16:13:54.657636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:48:33.384 [2024-11-05 16:13:54.657650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.930 ms 00:48:33.384 [2024-11-05 16:13:54.657659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:33.384 [2024-11-05 16:13:54.657709] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:33.384 [2024-11-05 16:13:54.657719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:48:33.384 [2024-11-05 16:13:54.657729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:48:33.384 [2024-11-05 16:13:54.657768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:33.384 [2024-11-05 16:13:54.658447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:33.384 [2024-11-05 16:13:54.658480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:48:33.384 [2024-11-05 16:13:54.658492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.599 ms 00:48:33.384 [2024-11-05 16:13:54.658501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:33.384 [2024-11-05 16:13:54.658662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:33.384 [2024-11-05 16:13:54.658673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:48:33.384 [2024-11-05 16:13:54.658683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:48:33.384 [2024-11-05 16:13:54.658696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:33.384 [2024-11-05 16:13:54.674350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:33.384 [2024-11-05 16:13:54.674393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:48:33.384 [2024-11-05 16:13:54.674407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.633 ms 00:48:33.384 [2024-11-05 16:13:54.674415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:33.384 [2024-11-05 16:13:54.688816] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:48:33.384 [2024-11-05 16:13:54.689012] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:48:33.384 [2024-11-05 16:13:54.689034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:33.384 [2024-11-05 16:13:54.689043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:48:33.384 [2024-11-05 16:13:54.689053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.513 ms 00:48:33.384 [2024-11-05 16:13:54.689061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:33.384 [2024-11-05 16:13:54.714964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:33.384 [2024-11-05 16:13:54.715021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:48:33.384 [2024-11-05 16:13:54.715034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.719 ms 00:48:33.385 [2024-11-05 16:13:54.715042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:33.385 [2024-11-05 16:13:54.727859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:33.385 [2024-11-05 16:13:54.728049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:48:33.385 [2024-11-05 16:13:54.728070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.760 ms 00:48:33.385 [2024-11-05 16:13:54.728077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:33.385 [2024-11-05 16:13:54.740624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:48:33.385 [2024-11-05 16:13:54.740668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:48:33.385 [2024-11-05 16:13:54.740681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.508 ms 00:48:33.385 [2024-11-05 16:13:54.740688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:33.385 [2024-11-05 16:13:54.741344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:33.385 [2024-11-05 16:13:54.741378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:48:33.385 [2024-11-05 16:13:54.741390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.527 ms 00:48:33.385 [2024-11-05 16:13:54.741401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:33.646 [2024-11-05 16:13:54.806651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:33.646 [2024-11-05 16:13:54.806770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:48:33.646 [2024-11-05 16:13:54.806793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.228 ms 00:48:33.646 [2024-11-05 16:13:54.806803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:33.646 [2024-11-05 16:13:54.818116] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:48:33.646 [2024-11-05 16:13:54.821641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:33.646 [2024-11-05 16:13:54.821688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:48:33.646 [2024-11-05 16:13:54.821701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.778 ms 00:48:33.646 [2024-11-05 16:13:54.821709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:33.646 [2024-11-05 16:13:54.821818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:33.646 [2024-11-05 16:13:54.821831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:48:33.646 [2024-11-05 16:13:54.821841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:48:33.646 [2024-11-05 16:13:54.821852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:33.646 [2024-11-05 16:13:54.823618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:33.646 [2024-11-05 16:13:54.823668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:48:33.646 [2024-11-05 16:13:54.823680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.727 ms 00:48:33.646 [2024-11-05 16:13:54.823688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:33.646 [2024-11-05 16:13:54.823717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:33.646 [2024-11-05 16:13:54.823726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:48:33.646 [2024-11-05 16:13:54.823748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:48:33.646 [2024-11-05 16:13:54.823756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:33.646 [2024-11-05 16:13:54.823800] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:48:33.646 [2024-11-05 16:13:54.823815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:33.646 [2024-11-05 16:13:54.823824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on 
startup 00:48:33.646 [2024-11-05 16:13:54.823833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:48:33.646 [2024-11-05 16:13:54.823841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:33.646 [2024-11-05 16:13:54.849988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:33.646 [2024-11-05 16:13:54.850170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:48:33.646 [2024-11-05 16:13:54.850236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.127 ms 00:48:33.646 [2024-11-05 16:13:54.850294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:33.646 [2024-11-05 16:13:54.850387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:33.646 [2024-11-05 16:13:54.850413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:48:33.646 [2024-11-05 16:13:54.850436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:48:33.646 [2024-11-05 16:13:54.850456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:33.646 [2024-11-05 16:13:54.851973] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 305.383 ms, result 0 00:48:35.032  [2024-11-05T16:13:57.337Z] Copying: 1052/1048576 [kB] (1052 kBps) [2024-11-05T16:13:58.280Z] Copying: 3960/1048576 [kB] (2908 kBps) [2024-11-05T16:13:59.223Z] Copying: 13240/1048576 [kB] (9280 kBps) [2024-11-05T16:14:00.159Z] Copying: 36/1024 [MB] (23 MBps) [2024-11-05T16:14:01.106Z] Copying: 68/1024 [MB] (31 MBps) [2024-11-05T16:14:02.052Z] Copying: 86/1024 [MB] (18 MBps) [2024-11-05T16:14:03.442Z] Copying: 110/1024 [MB] (23 MBps) [2024-11-05T16:14:04.379Z] Copying: 126/1024 [MB] (16 MBps) [2024-11-05T16:14:05.322Z] Copying: 159/1024 [MB] (32 MBps) [2024-11-05T16:14:06.265Z] Copying: 193/1024 [MB] (34 MBps) [2024-11-05T16:14:07.206Z] Copying: 214/1024 [MB] (20 MBps) [2024-11-05T16:14:08.145Z] Copying: 241/1024 [MB] (26 MBps) [2024-11-05T16:14:09.082Z] Copying: 261/1024 [MB] (20 MBps) [2024-11-05T16:14:10.460Z] Copying: 286/1024 [MB] (25 MBps) [2024-11-05T16:14:11.398Z] Copying: 324/1024 [MB] (38 MBps) [2024-11-05T16:14:12.339Z] Copying: 340/1024 [MB] (15 MBps) [2024-11-05T16:14:13.280Z] Copying: 378/1024 [MB] (38 MBps) [2024-11-05T16:14:14.221Z] Copying: 407/1024 [MB] (29 MBps) [2024-11-05T16:14:15.165Z] Copying: 423/1024 [MB] (15 MBps) [2024-11-05T16:14:16.156Z] Copying: 440/1024 [MB] (16 MBps) [2024-11-05T16:14:17.096Z] Copying: 456/1024 [MB] (15 MBps) [2024-11-05T16:14:18.467Z] Copying: 471/1024 [MB] (15 MBps) [2024-11-05T16:14:19.401Z] Copying: 502/1024 [MB] (30 MBps) [2024-11-05T16:14:20.340Z] Copying: 526/1024 [MB] (24 MBps) [2024-11-05T16:14:21.278Z] Copying: 555/1024 [MB] (29 MBps) [2024-11-05T16:14:22.223Z] Copying: 580/1024 [MB] (24 MBps) [2024-11-05T16:14:23.166Z] Copying: 596/1024 [MB] (15 MBps) [2024-11-05T16:14:24.134Z] Copying: 619/1024 [MB] (23 MBps) [2024-11-05T16:14:25.070Z] Copying: 639/1024 [MB] (20 MBps) [2024-11-05T16:14:26.455Z] Copying: 685/1024 [MB] (45 MBps) [2024-11-05T16:14:27.399Z] Copying: 702/1024 [MB] (17 MBps) [2024-11-05T16:14:28.344Z] Copying: 721/1024 [MB] (18 MBps) [2024-11-05T16:14:29.287Z] Copying: 742/1024 [MB] (21 MBps) [2024-11-05T16:14:30.230Z] Copying: 757/1024 [MB] (15 MBps) [2024-11-05T16:14:31.176Z] Copying: 792/1024 [MB] (34 MBps) [2024-11-05T16:14:32.117Z] Copying: 806/1024 [MB] (14 MBps) [2024-11-05T16:14:33.057Z] Copying: 831/1024 [MB] 
(25 MBps) [2024-11-05T16:14:34.440Z] Copying: 853/1024 [MB] (22 MBps) [2024-11-05T16:14:35.380Z] Copying: 870/1024 [MB] (16 MBps) [2024-11-05T16:14:36.322Z] Copying: 889/1024 [MB] (19 MBps) [2024-11-05T16:14:37.261Z] Copying: 919/1024 [MB] (29 MBps) [2024-11-05T16:14:38.201Z] Copying: 946/1024 [MB] (26 MBps) [2024-11-05T16:14:39.143Z] Copying: 975/1024 [MB] (28 MBps) [2024-11-05T16:14:40.083Z] Copying: 991/1024 [MB] (15 MBps) [2024-11-05T16:14:40.344Z] Copying: 1024/1024 [MB] (average 22 MBps)[2024-11-05 16:14:40.199617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:18.982 [2024-11-05 16:14:40.199682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:49:18.982 [2024-11-05 16:14:40.199701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:49:18.982 [2024-11-05 16:14:40.199709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:18.982 [2024-11-05 16:14:40.199731] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:49:18.982 [2024-11-05 16:14:40.202674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:18.982 [2024-11-05 16:14:40.202710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:49:18.982 [2024-11-05 16:14:40.202721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.913 ms 00:49:18.982 [2024-11-05 16:14:40.202729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:18.982 [2024-11-05 16:14:40.202965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:18.982 [2024-11-05 16:14:40.202976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:49:18.982 [2024-11-05 16:14:40.202988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.203 ms 00:49:18.982 [2024-11-05 16:14:40.202997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:18.982 [2024-11-05 16:14:40.216171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:18.982 [2024-11-05 16:14:40.216221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:49:18.982 [2024-11-05 16:14:40.216234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.157 ms 00:49:18.982 [2024-11-05 16:14:40.216242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:18.982 [2024-11-05 16:14:40.222560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:18.982 [2024-11-05 16:14:40.222590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:49:18.982 [2024-11-05 16:14:40.222601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.286 ms 00:49:18.982 [2024-11-05 16:14:40.222613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:18.982 [2024-11-05 16:14:40.247637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:18.982 [2024-11-05 16:14:40.247670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:49:18.982 [2024-11-05 16:14:40.247682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.973 ms 00:49:18.982 [2024-11-05 16:14:40.247694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:18.982 [2024-11-05 16:14:40.262112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:18.982 [2024-11-05 16:14:40.262262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid 
map metadata 00:49:18.982 [2024-11-05 16:14:40.262280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.382 ms 00:49:18.982 [2024-11-05 16:14:40.262289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:18.982 [2024-11-05 16:14:40.267164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:18.982 [2024-11-05 16:14:40.267198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:49:18.982 [2024-11-05 16:14:40.267208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.844 ms 00:49:18.982 [2024-11-05 16:14:40.267215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:18.982 [2024-11-05 16:14:40.292377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:18.982 [2024-11-05 16:14:40.292410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:49:18.982 [2024-11-05 16:14:40.292422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.142 ms 00:49:18.982 [2024-11-05 16:14:40.292433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:18.982 [2024-11-05 16:14:40.317004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:18.982 [2024-11-05 16:14:40.317049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:49:18.982 [2024-11-05 16:14:40.317068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.534 ms 00:49:18.982 [2024-11-05 16:14:40.317074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:18.982 [2024-11-05 16:14:40.341267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:18.982 [2024-11-05 16:14:40.341307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:49:18.982 [2024-11-05 16:14:40.341318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.156 ms 00:49:18.982 [2024-11-05 16:14:40.341325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:19.244 [2024-11-05 16:14:40.365798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:19.244 [2024-11-05 16:14:40.365833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:49:19.244 [2024-11-05 16:14:40.365843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.412 ms 00:49:19.244 [2024-11-05 16:14:40.365851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:19.244 [2024-11-05 16:14:40.365885] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:49:19.244 [2024-11-05 16:14:40.365898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:49:19.245 [2024-11-05 16:14:40.365908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:49:19.245 [2024-11-05 16:14:40.365916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.365925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.365932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.365941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.365948] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.365957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.365970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.365983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.365995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 
16:14:40.366157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 
00:49:19.245 [2024-11-05 16:14:40.366384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 
wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:49:19.245 [2024-11-05 16:14:40.366660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:49:19.246 [2024-11-05 16:14:40.366667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:49:19.246 [2024-11-05 16:14:40.366674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:49:19.246 [2024-11-05 16:14:40.366681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:49:19.246 [2024-11-05 16:14:40.366689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:49:19.246 [2024-11-05 16:14:40.366697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:49:19.246 [2024-11-05 16:14:40.366705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:49:19.246 [2024-11-05 16:14:40.366712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:49:19.246 [2024-11-05 16:14:40.366720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:49:19.246 [2024-11-05 16:14:40.366727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:49:19.246 [2024-11-05 16:14:40.366751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:49:19.246 [2024-11-05 16:14:40.366767] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:49:19.246 [2024-11-05 16:14:40.366775] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8db061b2-73ec-45fb-89ba-dce50d3beacb 00:49:19.246 [2024-11-05 16:14:40.366784] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:49:19.246 [2024-11-05 16:14:40.366791] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 164288 00:49:19.246 [2024-11-05 16:14:40.366798] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 162304 00:49:19.246 [2024-11-05 16:14:40.366810] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0122 00:49:19.246 [2024-11-05 16:14:40.366817] ftl_debug.c: 218:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] limits: 00:49:19.246 [2024-11-05 16:14:40.366824] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:49:19.246 [2024-11-05 16:14:40.366832] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:49:19.246 [2024-11-05 16:14:40.366845] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:49:19.246 [2024-11-05 16:14:40.366851] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:49:19.246 [2024-11-05 16:14:40.366858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:19.246 [2024-11-05 16:14:40.366866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:49:19.246 [2024-11-05 16:14:40.366874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.975 ms 00:49:19.246 [2024-11-05 16:14:40.366882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:19.246 [2024-11-05 16:14:40.379965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:19.246 [2024-11-05 16:14:40.380093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:49:19.246 [2024-11-05 16:14:40.380108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.064 ms 00:49:19.246 [2024-11-05 16:14:40.380117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:19.246 [2024-11-05 16:14:40.380487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:19.246 [2024-11-05 16:14:40.380499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:49:19.246 [2024-11-05 16:14:40.380507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.335 ms 00:49:19.246 [2024-11-05 16:14:40.380515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:19.246 [2024-11-05 16:14:40.415479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:19.246 [2024-11-05 16:14:40.415621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:49:19.246 [2024-11-05 16:14:40.415637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:19.246 [2024-11-05 16:14:40.415647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:19.246 [2024-11-05 16:14:40.415699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:19.246 [2024-11-05 16:14:40.415707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:49:19.246 [2024-11-05 16:14:40.415715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:19.246 [2024-11-05 16:14:40.415722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:19.246 [2024-11-05 16:14:40.415809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:19.246 [2024-11-05 16:14:40.415824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:49:19.246 [2024-11-05 16:14:40.415832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:19.246 [2024-11-05 16:14:40.415839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:19.246 [2024-11-05 16:14:40.415855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:19.246 [2024-11-05 16:14:40.415862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:49:19.246 [2024-11-05 16:14:40.415870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:19.246 [2024-11-05 
16:14:40.415877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:19.246 [2024-11-05 16:14:40.498036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:19.246 [2024-11-05 16:14:40.498216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:49:19.246 [2024-11-05 16:14:40.498253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:19.246 [2024-11-05 16:14:40.498264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:19.246 [2024-11-05 16:14:40.567062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:19.246 [2024-11-05 16:14:40.567112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:49:19.246 [2024-11-05 16:14:40.567124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:19.246 [2024-11-05 16:14:40.567132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:19.246 [2024-11-05 16:14:40.567191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:19.246 [2024-11-05 16:14:40.567201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:49:19.246 [2024-11-05 16:14:40.567216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:19.246 [2024-11-05 16:14:40.567224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:19.246 [2024-11-05 16:14:40.567276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:19.246 [2024-11-05 16:14:40.567285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:49:19.246 [2024-11-05 16:14:40.567294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:19.246 [2024-11-05 16:14:40.567302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:19.246 [2024-11-05 16:14:40.567406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:19.246 [2024-11-05 16:14:40.567417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:49:19.246 [2024-11-05 16:14:40.567426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:19.246 [2024-11-05 16:14:40.567437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:19.246 [2024-11-05 16:14:40.567468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:19.246 [2024-11-05 16:14:40.567477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:49:19.246 [2024-11-05 16:14:40.567485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:19.246 [2024-11-05 16:14:40.567492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:19.246 [2024-11-05 16:14:40.567530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:19.246 [2024-11-05 16:14:40.567538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:49:19.246 [2024-11-05 16:14:40.567547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:19.246 [2024-11-05 16:14:40.567557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:19.246 [2024-11-05 16:14:40.567599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:19.246 [2024-11-05 16:14:40.567609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:49:19.246 [2024-11-05 16:14:40.567617] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:19.246 [2024-11-05 16:14:40.567625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:19.246 [2024-11-05 16:14:40.567769] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 368.096 ms, result 0 00:49:20.192 00:49:20.192 00:49:20.192 16:14:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:49:22.778 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:49:22.778 16:14:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:49:22.778 [2024-11-05 16:14:43.607942] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 00:49:22.778 [2024-11-05 16:14:43.608070] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79623 ] 00:49:22.778 [2024-11-05 16:14:43.772339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:22.778 [2024-11-05 16:14:43.866496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:49:22.778 [2024-11-05 16:14:44.123219] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:49:22.778 [2024-11-05 16:14:44.123281] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:49:23.041 [2024-11-05 16:14:44.282251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:23.041 [2024-11-05 16:14:44.282292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:49:23.041 [2024-11-05 16:14:44.282310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:49:23.041 [2024-11-05 16:14:44.282318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:23.041 [2024-11-05 16:14:44.282359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:23.041 [2024-11-05 16:14:44.282368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:49:23.041 [2024-11-05 16:14:44.282378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:49:23.041 [2024-11-05 16:14:44.282386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:23.041 [2024-11-05 16:14:44.282404] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:49:23.041 [2024-11-05 16:14:44.283130] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:49:23.041 [2024-11-05 16:14:44.283150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:23.041 [2024-11-05 16:14:44.283157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:49:23.041 [2024-11-05 16:14:44.283166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.750 ms 00:49:23.041 [2024-11-05 16:14:44.283173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:23.041 [2024-11-05 16:14:44.284267] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:49:23.041 [2024-11-05 16:14:44.297135] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:23.041 [2024-11-05 16:14:44.297267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:49:23.041 [2024-11-05 16:14:44.297284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.869 ms 00:49:23.041 [2024-11-05 16:14:44.297292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:23.041 [2024-11-05 16:14:44.297341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:23.042 [2024-11-05 16:14:44.297351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:49:23.042 [2024-11-05 16:14:44.297359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:49:23.042 [2024-11-05 16:14:44.297366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:23.042 [2024-11-05 16:14:44.302161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:23.042 [2024-11-05 16:14:44.302188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:49:23.042 [2024-11-05 16:14:44.302197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.736 ms 00:49:23.042 [2024-11-05 16:14:44.302208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:23.042 [2024-11-05 16:14:44.302292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:23.042 [2024-11-05 16:14:44.302301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:49:23.042 [2024-11-05 16:14:44.302309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:49:23.042 [2024-11-05 16:14:44.302316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:23.042 [2024-11-05 16:14:44.302347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:23.042 [2024-11-05 16:14:44.302356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:49:23.042 [2024-11-05 16:14:44.302363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:49:23.042 [2024-11-05 16:14:44.302370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:23.042 [2024-11-05 16:14:44.302393] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:49:23.042 [2024-11-05 16:14:44.305529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:23.042 [2024-11-05 16:14:44.305555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:49:23.042 [2024-11-05 16:14:44.305567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.141 ms 00:49:23.042 [2024-11-05 16:14:44.305574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:23.042 [2024-11-05 16:14:44.305601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:23.042 [2024-11-05 16:14:44.305608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:49:23.042 [2024-11-05 16:14:44.305616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:49:23.042 [2024-11-05 16:14:44.305623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:23.042 [2024-11-05 16:14:44.305641] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:49:23.042 [2024-11-05 16:14:44.305657] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:49:23.042 
[2024-11-05 16:14:44.305690] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:49:23.042 [2024-11-05 16:14:44.305706] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:49:23.042 [2024-11-05 16:14:44.305820] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:49:23.042 [2024-11-05 16:14:44.305832] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:49:23.042 [2024-11-05 16:14:44.305842] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:49:23.042 [2024-11-05 16:14:44.305851] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:49:23.042 [2024-11-05 16:14:44.305860] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:49:23.042 [2024-11-05 16:14:44.305867] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:49:23.042 [2024-11-05 16:14:44.305875] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:49:23.042 [2024-11-05 16:14:44.305882] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:49:23.042 [2024-11-05 16:14:44.305891] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:49:23.042 [2024-11-05 16:14:44.305899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:23.042 [2024-11-05 16:14:44.305906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:49:23.042 [2024-11-05 16:14:44.305913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:49:23.042 [2024-11-05 16:14:44.305920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:23.042 [2024-11-05 16:14:44.306001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:23.042 [2024-11-05 16:14:44.306009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:49:23.042 [2024-11-05 16:14:44.306016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:49:23.042 [2024-11-05 16:14:44.306023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:23.042 [2024-11-05 16:14:44.306123] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:49:23.042 [2024-11-05 16:14:44.306133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:49:23.042 [2024-11-05 16:14:44.306141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:49:23.042 [2024-11-05 16:14:44.306148] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:49:23.042 [2024-11-05 16:14:44.306155] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:49:23.042 [2024-11-05 16:14:44.306162] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:49:23.042 [2024-11-05 16:14:44.306168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:49:23.042 [2024-11-05 16:14:44.306175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:49:23.042 [2024-11-05 16:14:44.306182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:49:23.042 [2024-11-05 16:14:44.306188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:49:23.042 
[2024-11-05 16:14:44.306195] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:49:23.042 [2024-11-05 16:14:44.306201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:49:23.042 [2024-11-05 16:14:44.306209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:49:23.042 [2024-11-05 16:14:44.306216] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:49:23.042 [2024-11-05 16:14:44.306223] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:49:23.042 [2024-11-05 16:14:44.306244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:49:23.042 [2024-11-05 16:14:44.306250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:49:23.042 [2024-11-05 16:14:44.306257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:49:23.042 [2024-11-05 16:14:44.306263] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:49:23.042 [2024-11-05 16:14:44.306270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:49:23.042 [2024-11-05 16:14:44.306276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:49:23.042 [2024-11-05 16:14:44.306283] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:49:23.042 [2024-11-05 16:14:44.306290] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:49:23.042 [2024-11-05 16:14:44.306296] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:49:23.042 [2024-11-05 16:14:44.306302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:49:23.042 [2024-11-05 16:14:44.306309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:49:23.042 [2024-11-05 16:14:44.306315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:49:23.042 [2024-11-05 16:14:44.306321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:49:23.042 [2024-11-05 16:14:44.306328] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:49:23.042 [2024-11-05 16:14:44.306334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:49:23.042 [2024-11-05 16:14:44.306340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:49:23.042 [2024-11-05 16:14:44.306348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:49:23.042 [2024-11-05 16:14:44.306354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:49:23.042 [2024-11-05 16:14:44.306360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:49:23.042 [2024-11-05 16:14:44.306366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:49:23.042 [2024-11-05 16:14:44.306373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:49:23.042 [2024-11-05 16:14:44.306379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:49:23.042 [2024-11-05 16:14:44.306386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:49:23.042 [2024-11-05 16:14:44.306392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:49:23.042 [2024-11-05 16:14:44.306398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:49:23.042 [2024-11-05 16:14:44.306405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:49:23.042 [2024-11-05 16:14:44.306411] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 113.75 MiB 00:49:23.042 [2024-11-05 16:14:44.306417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:49:23.042 [2024-11-05 16:14:44.306423] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:49:23.042 [2024-11-05 16:14:44.306432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:49:23.042 [2024-11-05 16:14:44.306439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:49:23.042 [2024-11-05 16:14:44.306446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:49:23.042 [2024-11-05 16:14:44.306453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:49:23.042 [2024-11-05 16:14:44.306460] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:49:23.042 [2024-11-05 16:14:44.306466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:49:23.042 [2024-11-05 16:14:44.306472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:49:23.042 [2024-11-05 16:14:44.306479] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:49:23.042 [2024-11-05 16:14:44.306486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:49:23.042 [2024-11-05 16:14:44.306494] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:49:23.042 [2024-11-05 16:14:44.306502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:49:23.042 [2024-11-05 16:14:44.306512] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:49:23.042 [2024-11-05 16:14:44.306519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:49:23.042 [2024-11-05 16:14:44.306526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:49:23.042 [2024-11-05 16:14:44.306533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:49:23.043 [2024-11-05 16:14:44.306540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:49:23.043 [2024-11-05 16:14:44.306548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:49:23.043 [2024-11-05 16:14:44.306555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:49:23.043 [2024-11-05 16:14:44.306562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:49:23.043 [2024-11-05 16:14:44.306569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:49:23.043 [2024-11-05 16:14:44.306576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:49:23.043 [2024-11-05 16:14:44.306583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:49:23.043 [2024-11-05 16:14:44.306590] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:49:23.043 [2024-11-05 16:14:44.306596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:49:23.043 [2024-11-05 16:14:44.306603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:49:23.043 [2024-11-05 16:14:44.306610] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:49:23.043 [2024-11-05 16:14:44.306618] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:49:23.043 [2024-11-05 16:14:44.306626] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:49:23.043 [2024-11-05 16:14:44.306633] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:49:23.043 [2024-11-05 16:14:44.306640] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:49:23.043 [2024-11-05 16:14:44.306647] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:49:23.043 [2024-11-05 16:14:44.306654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:23.043 [2024-11-05 16:14:44.306661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:49:23.043 [2024-11-05 16:14:44.306669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.599 ms 00:49:23.043 [2024-11-05 16:14:44.306676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:23.043 [2024-11-05 16:14:44.332191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:23.043 [2024-11-05 16:14:44.332320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:49:23.043 [2024-11-05 16:14:44.332336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.464 ms 00:49:23.043 [2024-11-05 16:14:44.332348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:23.043 [2024-11-05 16:14:44.332429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:23.043 [2024-11-05 16:14:44.332436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:49:23.043 [2024-11-05 16:14:44.332444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:49:23.043 [2024-11-05 16:14:44.332451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:23.043 [2024-11-05 16:14:44.382525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:23.043 [2024-11-05 16:14:44.382645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:49:23.043 [2024-11-05 16:14:44.382662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.025 ms 00:49:23.043 [2024-11-05 16:14:44.382671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:23.043 [2024-11-05 16:14:44.382712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:23.043 [2024-11-05 16:14:44.382723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
valid map 00:49:23.043 [2024-11-05 16:14:44.382749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:49:23.043 [2024-11-05 16:14:44.382759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:23.043 [2024-11-05 16:14:44.383122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:23.043 [2024-11-05 16:14:44.383139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:49:23.043 [2024-11-05 16:14:44.383148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:49:23.043 [2024-11-05 16:14:44.383157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:23.043 [2024-11-05 16:14:44.383288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:23.043 [2024-11-05 16:14:44.383299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:49:23.043 [2024-11-05 16:14:44.383313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:49:23.043 [2024-11-05 16:14:44.383321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:23.043 [2024-11-05 16:14:44.396248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:23.043 [2024-11-05 16:14:44.396278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:49:23.043 [2024-11-05 16:14:44.396288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.909 ms 00:49:23.043 [2024-11-05 16:14:44.396295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:23.305 [2024-11-05 16:14:44.409599] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:49:23.305 [2024-11-05 16:14:44.409704] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:49:23.305 [2024-11-05 16:14:44.409718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:23.305 [2024-11-05 16:14:44.409726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:49:23.305 [2024-11-05 16:14:44.409748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.338 ms 00:49:23.305 [2024-11-05 16:14:44.409756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:23.305 [2024-11-05 16:14:44.433939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:23.305 [2024-11-05 16:14:44.433970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:49:23.305 [2024-11-05 16:14:44.433981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.148 ms 00:49:23.305 [2024-11-05 16:14:44.433989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:23.305 [2024-11-05 16:14:44.445731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:23.305 [2024-11-05 16:14:44.445765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:49:23.305 [2024-11-05 16:14:44.445774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.707 ms 00:49:23.305 [2024-11-05 16:14:44.445781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:23.305 [2024-11-05 16:14:44.457413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:23.305 [2024-11-05 16:14:44.457442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:49:23.305 [2024-11-05 16:14:44.457452] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.600 ms 00:49:23.305 [2024-11-05 16:14:44.457459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:23.305 [2024-11-05 16:14:44.458059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:23.305 [2024-11-05 16:14:44.458079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:49:23.305 [2024-11-05 16:14:44.458091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.521 ms 00:49:23.305 [2024-11-05 16:14:44.458098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:23.305 [2024-11-05 16:14:44.513368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:23.305 [2024-11-05 16:14:44.513411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:49:23.305 [2024-11-05 16:14:44.513428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.253 ms 00:49:23.305 [2024-11-05 16:14:44.513436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:23.305 [2024-11-05 16:14:44.523698] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:49:23.305 [2024-11-05 16:14:44.526012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:23.305 [2024-11-05 16:14:44.526040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:49:23.305 [2024-11-05 16:14:44.526051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.538 ms 00:49:23.305 [2024-11-05 16:14:44.526059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:23.305 [2024-11-05 16:14:44.526141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:23.305 [2024-11-05 16:14:44.526151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:49:23.305 [2024-11-05 16:14:44.526161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:49:23.305 [2024-11-05 16:14:44.526171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:23.305 [2024-11-05 16:14:44.526766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:23.305 [2024-11-05 16:14:44.526791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:49:23.305 [2024-11-05 16:14:44.526801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.557 ms 00:49:23.306 [2024-11-05 16:14:44.526809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:23.306 [2024-11-05 16:14:44.526832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:23.306 [2024-11-05 16:14:44.526841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:49:23.306 [2024-11-05 16:14:44.526850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:49:23.306 [2024-11-05 16:14:44.526858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:23.306 [2024-11-05 16:14:44.526894] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:49:23.306 [2024-11-05 16:14:44.526905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:23.306 [2024-11-05 16:14:44.526913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:49:23.306 [2024-11-05 16:14:44.526922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:49:23.306 [2024-11-05 16:14:44.526930] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:23.306 [2024-11-05 16:14:44.550327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:23.306 [2024-11-05 16:14:44.550359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:49:23.306 [2024-11-05 16:14:44.550374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.380 ms 00:49:23.306 [2024-11-05 16:14:44.550383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:23.306 [2024-11-05 16:14:44.550455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:23.306 [2024-11-05 16:14:44.550465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:49:23.306 [2024-11-05 16:14:44.550473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:49:23.306 [2024-11-05 16:14:44.550480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:23.306 [2024-11-05 16:14:44.551374] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 268.726 ms, result 0 00:49:24.689  [2024-11-05T16:14:46.992Z] Copying: 18/1024 [MB] (18 MBps) [2024-11-05T16:14:47.937Z] Copying: 32/1024 [MB] (13 MBps) [2024-11-05T16:14:48.880Z] Copying: 43544/1048576 [kB] (10096 kBps) [2024-11-05T16:14:49.821Z] Copying: 55/1024 [MB] (13 MBps) [2024-11-05T16:14:50.759Z] Copying: 69/1024 [MB] (14 MBps) [2024-11-05T16:14:52.145Z] Copying: 81/1024 [MB] (11 MBps) [2024-11-05T16:14:53.087Z] Copying: 99/1024 [MB] (17 MBps) [2024-11-05T16:14:54.027Z] Copying: 111856/1048576 [kB] (10148 kBps) [2024-11-05T16:14:54.968Z] Copying: 121420/1048576 [kB] (9564 kBps) [2024-11-05T16:14:55.909Z] Copying: 131200/1048576 [kB] (9780 kBps) [2024-11-05T16:14:56.850Z] Copying: 140664/1048576 [kB] (9464 kBps) [2024-11-05T16:14:57.942Z] Copying: 147/1024 [MB] (10 MBps) [2024-11-05T16:14:58.887Z] Copying: 161384/1048576 [kB] (10024 kBps) [2024-11-05T16:14:59.832Z] Copying: 167/1024 [MB] (10 MBps) [2024-11-05T16:15:00.774Z] Copying: 181280/1048576 [kB] (9628 kBps) [2024-11-05T16:15:02.163Z] Copying: 190936/1048576 [kB] (9656 kBps) [2024-11-05T16:15:02.735Z] Copying: 200608/1048576 [kB] (9672 kBps) [2024-11-05T16:15:04.124Z] Copying: 216/1024 [MB] (20 MBps) [2024-11-05T16:15:05.068Z] Copying: 231/1024 [MB] (15 MBps) [2024-11-05T16:15:06.013Z] Copying: 242/1024 [MB] (11 MBps) [2024-11-05T16:15:06.960Z] Copying: 258004/1048576 [kB] (9380 kBps) [2024-11-05T16:15:07.901Z] Copying: 262/1024 [MB] (10 MBps) [2024-11-05T16:15:08.848Z] Copying: 278552/1048576 [kB] (9700 kBps) [2024-11-05T16:15:09.794Z] Copying: 287/1024 [MB] (15 MBps) [2024-11-05T16:15:10.778Z] Copying: 306/1024 [MB] (19 MBps) [2024-11-05T16:15:11.723Z] Copying: 319/1024 [MB] (12 MBps) [2024-11-05T16:15:13.161Z] Copying: 337/1024 [MB] (18 MBps) [2024-11-05T16:15:13.733Z] Copying: 355816/1048576 [kB] (9972 kBps) [2024-11-05T16:15:15.121Z] Copying: 366/1024 [MB] (18 MBps) [2024-11-05T16:15:16.066Z] Copying: 380/1024 [MB] (14 MBps) [2024-11-05T16:15:17.010Z] Copying: 392/1024 [MB] (11 MBps) [2024-11-05T16:15:17.953Z] Copying: 408/1024 [MB] (16 MBps) [2024-11-05T16:15:18.892Z] Copying: 423/1024 [MB] (14 MBps) [2024-11-05T16:15:19.835Z] Copying: 446/1024 [MB] (22 MBps) [2024-11-05T16:15:20.781Z] Copying: 457/1024 [MB] (10 MBps) [2024-11-05T16:15:21.768Z] Copying: 474/1024 [MB] (17 MBps) [2024-11-05T16:15:23.163Z] Copying: 487/1024 [MB] (13 MBps) [2024-11-05T16:15:23.736Z] Copying: 509552/1048576 [kB] (10068 kBps) [2024-11-05T16:15:25.122Z] 
Copying: 508/1024 [MB] (11 MBps) [2024-11-05T16:15:26.064Z] Copying: 522/1024 [MB] (14 MBps) [2024-11-05T16:15:27.006Z] Copying: 541/1024 [MB] (18 MBps) [2024-11-05T16:15:27.951Z] Copying: 556/1024 [MB] (14 MBps) [2024-11-05T16:15:28.898Z] Copying: 579228/1048576 [kB] (9668 kBps) [2024-11-05T16:15:29.848Z] Copying: 588728/1048576 [kB] (9500 kBps) [2024-11-05T16:15:30.793Z] Copying: 588/1024 [MB] (13 MBps) [2024-11-05T16:15:31.738Z] Copying: 612044/1048576 [kB] (9492 kBps) [2024-11-05T16:15:33.121Z] Copying: 607/1024 [MB] (10 MBps) [2024-11-05T16:15:34.065Z] Copying: 619/1024 [MB] (11 MBps) [2024-11-05T16:15:35.011Z] Copying: 631/1024 [MB] (12 MBps) [2024-11-05T16:15:35.957Z] Copying: 656448/1048576 [kB] (9648 kBps) [2024-11-05T16:15:36.900Z] Copying: 665888/1048576 [kB] (9440 kBps) [2024-11-05T16:15:37.845Z] Copying: 675588/1048576 [kB] (9700 kBps) [2024-11-05T16:15:38.788Z] Copying: 685184/1048576 [kB] (9596 kBps) [2024-11-05T16:15:39.734Z] Copying: 694720/1048576 [kB] (9536 kBps) [2024-11-05T16:15:41.119Z] Copying: 704416/1048576 [kB] (9696 kBps) [2024-11-05T16:15:42.063Z] Copying: 699/1024 [MB] (11 MBps) [2024-11-05T16:15:43.005Z] Copying: 719/1024 [MB] (19 MBps) [2024-11-05T16:15:43.945Z] Copying: 738/1024 [MB] (18 MBps) [2024-11-05T16:15:44.890Z] Copying: 770/1024 [MB] (32 MBps) [2024-11-05T16:15:45.836Z] Copying: 781/1024 [MB] (10 MBps) [2024-11-05T16:15:46.783Z] Copying: 800/1024 [MB] (19 MBps) [2024-11-05T16:15:47.757Z] Copying: 817/1024 [MB] (16 MBps) [2024-11-05T16:15:49.146Z] Copying: 832/1024 [MB] (15 MBps) [2024-11-05T16:15:50.090Z] Copying: 846/1024 [MB] (13 MBps) [2024-11-05T16:15:51.033Z] Copying: 861/1024 [MB] (15 MBps) [2024-11-05T16:15:51.977Z] Copying: 875/1024 [MB] (13 MBps) [2024-11-05T16:15:52.923Z] Copying: 894/1024 [MB] (19 MBps) [2024-11-05T16:15:53.867Z] Copying: 906/1024 [MB] (12 MBps) [2024-11-05T16:15:54.812Z] Copying: 923/1024 [MB] (16 MBps) [2024-11-05T16:15:55.757Z] Copying: 942/1024 [MB] (19 MBps) [2024-11-05T16:15:57.143Z] Copying: 955/1024 [MB] (12 MBps) [2024-11-05T16:15:58.086Z] Copying: 969/1024 [MB] (13 MBps) [2024-11-05T16:15:59.032Z] Copying: 981/1024 [MB] (11 MBps) [2024-11-05T16:15:59.975Z] Copying: 1004/1024 [MB] (22 MBps) [2024-11-05T16:16:00.236Z] Copying: 1019/1024 [MB] (14 MBps) [2024-11-05T16:16:00.502Z] Copying: 1024/1024 [MB] (average 13 MBps)[2024-11-05 16:16:00.293453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:39.140 [2024-11-05 16:16:00.293564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:50:39.140 [2024-11-05 16:16:00.293591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:50:39.140 [2024-11-05 16:16:00.293604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:39.140 [2024-11-05 16:16:00.293638] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:50:39.140 [2024-11-05 16:16:00.298574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:39.140 [2024-11-05 16:16:00.298625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:50:39.140 [2024-11-05 16:16:00.298651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.907 ms 00:50:39.140 [2024-11-05 16:16:00.298663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:39.140 [2024-11-05 16:16:00.299021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:39.140 [2024-11-05 16:16:00.299037] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Stop core poller 00:50:39.140 [2024-11-05 16:16:00.299051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:50:39.140 [2024-11-05 16:16:00.299063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:39.140 [2024-11-05 16:16:00.303879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:39.140 [2024-11-05 16:16:00.304046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:50:39.140 [2024-11-05 16:16:00.304064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.797 ms 00:50:39.140 [2024-11-05 16:16:00.304082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:39.140 [2024-11-05 16:16:00.310462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:39.140 [2024-11-05 16:16:00.310498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:50:39.140 [2024-11-05 16:16:00.310511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.352 ms 00:50:39.140 [2024-11-05 16:16:00.310520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:39.140 [2024-11-05 16:16:00.338267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:39.140 [2024-11-05 16:16:00.338452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:50:39.140 [2024-11-05 16:16:00.338476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.661 ms 00:50:39.140 [2024-11-05 16:16:00.338484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:39.140 [2024-11-05 16:16:00.354501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:39.141 [2024-11-05 16:16:00.354544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:50:39.141 [2024-11-05 16:16:00.354558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.973 ms 00:50:39.141 [2024-11-05 16:16:00.354569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:39.141 [2024-11-05 16:16:00.359439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:39.141 [2024-11-05 16:16:00.359479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:50:39.141 [2024-11-05 16:16:00.359491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.802 ms 00:50:39.141 [2024-11-05 16:16:00.359500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:39.141 [2024-11-05 16:16:00.386246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:39.141 [2024-11-05 16:16:00.386289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:50:39.141 [2024-11-05 16:16:00.386301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.727 ms 00:50:39.141 [2024-11-05 16:16:00.386309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:39.141 [2024-11-05 16:16:00.411747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:39.141 [2024-11-05 16:16:00.411802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:50:39.141 [2024-11-05 16:16:00.411813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.387 ms 00:50:39.141 [2024-11-05 16:16:00.411821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:39.141 [2024-11-05 16:16:00.437043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:39.141 
[2024-11-05 16:16:00.437082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:50:39.141 [2024-11-05 16:16:00.437094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.170 ms 00:50:39.141 [2024-11-05 16:16:00.437102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:39.141 [2024-11-05 16:16:00.462432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:39.141 [2024-11-05 16:16:00.462471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:50:39.141 [2024-11-05 16:16:00.462482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.253 ms 00:50:39.141 [2024-11-05 16:16:00.462490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:39.141 [2024-11-05 16:16:00.462541] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:50:39.141 [2024-11-05 16:16:00.462567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:50:39.141 [2024-11-05 16:16:00.462584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:50:39.141 [2024-11-05 16:16:00.462594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:50:39.141 [2024-11-05 16:16:00.462602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:50:39.141 [2024-11-05 16:16:00.462611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:50:39.141 [2024-11-05 16:16:00.462620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:50:39.141 [2024-11-05 16:16:00.462629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:50:39.141 [2024-11-05 16:16:00.462638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:50:39.141 [2024-11-05 16:16:00.462646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:50:39.141 [2024-11-05 16:16:00.462654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:50:39.141 [2024-11-05 16:16:00.462663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:50:39.141 [2024-11-05 16:16:00.462671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:50:39.141 [2024-11-05 16:16:00.462679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:50:39.141 [2024-11-05 16:16:00.462687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:50:39.141 [2024-11-05 16:16:00.462696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:50:39.141 [2024-11-05 16:16:00.462703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:50:39.141 [2024-11-05 16:16:00.462712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:50:39.141 [2024-11-05 16:16:00.462721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:50:39.141 [2024-11-05 16:16:00.462729] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:50:39.141 [2024-11-05 16:16:00.462757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:50:39.141 [2024-11-05 16:16:00.462766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:50:39.141 [2024-11-05 16:16:00.462774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:50:39.141 [2024-11-05 16:16:00.462783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:50:39.141 [2024-11-05 16:16:00.462791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:50:39.141 [2024-11-05 16:16:00.462800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:50:39.141 [2024-11-05 16:16:00.462808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:50:39.141 [2024-11-05 16:16:00.462818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:50:39.141 [2024-11-05 16:16:00.462826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:50:39.141 [2024-11-05 16:16:00.462834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:50:39.141 [2024-11-05 16:16:00.462843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:50:39.141 [2024-11-05 16:16:00.462851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:50:39.141 [2024-11-05 16:16:00.462859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:50:39.141 [2024-11-05 16:16:00.462867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:50:39.141 [2024-11-05 16:16:00.462876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:50:39.141 [2024-11-05 16:16:00.462885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:50:39.141 [2024-11-05 16:16:00.462893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:50:39.141 [2024-11-05 16:16:00.462901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:50:39.141 [2024-11-05 16:16:00.462909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:50:39.141 [2024-11-05 16:16:00.462917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:50:39.141 [2024-11-05 16:16:00.462926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:50:39.141 [2024-11-05 16:16:00.462933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:50:39.141 [2024-11-05 16:16:00.462941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:50:39.141 [2024-11-05 16:16:00.462952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:50:39.141 [2024-11-05 16:16:00.462960] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:50:39.142 [2024-11-05 16:16:00.462967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:50:39.142 [2024-11-05 16:16:00.462974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:50:39.142 [2024-11-05 16:16:00.462982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:50:39.142 [2024-11-05 16:16:00.462989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:50:39.142 [2024-11-05 16:16:00.463002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:50:39.142 [2024-11-05 16:16:00.463010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:50:39.142 [2024-11-05 16:16:00.463018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:50:39.142 [2024-11-05 16:16:00.463025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:50:39.142 [2024-11-05 16:16:00.463033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:50:39.142 [2024-11-05 16:16:00.463041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:50:39.142 [2024-11-05 16:16:00.463050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:50:39.142 [2024-11-05 16:16:00.463057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:50:39.142 [2024-11-05 16:16:00.463065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:50:39.142 [2024-11-05 16:16:00.463072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:50:39.142 [2024-11-05 16:16:00.463080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:50:39.142 [2024-11-05 16:16:00.463087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:50:39.142 [2024-11-05 16:16:00.463095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:50:39.142 [2024-11-05 16:16:00.463102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:50:39.142 [2024-11-05 16:16:00.463110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:50:39.142 [2024-11-05 16:16:00.463118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:50:39.142 [2024-11-05 16:16:00.463125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:50:39.142 [2024-11-05 16:16:00.463133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:50:39.142 [2024-11-05 16:16:00.463140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:50:39.142 [2024-11-05 16:16:00.463148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:50:39.142 [2024-11-05 
16:16:00.463155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:50:39.142 [2024-11-05 16:16:00.463162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:50:39.142 [2024-11-05 16:16:00.463170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:50:39.142 [2024-11-05 16:16:00.463177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:50:39.142 [2024-11-05 16:16:00.463185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:50:39.142 [2024-11-05 16:16:00.463193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:50:39.142 [2024-11-05 16:16:00.463201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:50:39.142 [2024-11-05 16:16:00.463208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:50:39.142 [2024-11-05 16:16:00.463216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:50:39.142 [2024-11-05 16:16:00.463224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:50:39.142 [2024-11-05 16:16:00.463232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:50:39.142 [2024-11-05 16:16:00.463239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:50:39.142 [2024-11-05 16:16:00.463249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:50:39.142 [2024-11-05 16:16:00.463257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:50:39.142 [2024-11-05 16:16:00.463264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:50:39.142 [2024-11-05 16:16:00.463273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:50:39.142 [2024-11-05 16:16:00.463280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:50:39.142 [2024-11-05 16:16:00.463288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:50:39.142 [2024-11-05 16:16:00.463296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:50:39.142 [2024-11-05 16:16:00.463304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:50:39.142 [2024-11-05 16:16:00.463312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:50:39.142 [2024-11-05 16:16:00.463320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:50:39.142 [2024-11-05 16:16:00.463330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:50:39.142 [2024-11-05 16:16:00.463337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:50:39.142 [2024-11-05 16:16:00.463345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 
00:50:39.142 [2024-11-05 16:16:00.463352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:50:39.142 [2024-11-05 16:16:00.463360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:50:39.142 [2024-11-05 16:16:00.463368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:50:39.142 [2024-11-05 16:16:00.463376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:50:39.142 [2024-11-05 16:16:00.463384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:50:39.142 [2024-11-05 16:16:00.463391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:50:39.142 [2024-11-05 16:16:00.463399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:50:39.142 [2024-11-05 16:16:00.463416] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:50:39.142 [2024-11-05 16:16:00.463425] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8db061b2-73ec-45fb-89ba-dce50d3beacb 00:50:39.142 [2024-11-05 16:16:00.463433] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:50:39.142 [2024-11-05 16:16:00.463441] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:50:39.142 [2024-11-05 16:16:00.463448] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:50:39.142 [2024-11-05 16:16:00.463457] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:50:39.142 [2024-11-05 16:16:00.463464] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:50:39.142 [2024-11-05 16:16:00.463472] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:50:39.142 [2024-11-05 16:16:00.463487] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:50:39.142 [2024-11-05 16:16:00.463494] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:50:39.142 [2024-11-05 16:16:00.463500] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:50:39.143 [2024-11-05 16:16:00.463507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:39.143 [2024-11-05 16:16:00.463519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:50:39.143 [2024-11-05 16:16:00.463528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.968 ms 00:50:39.143 [2024-11-05 16:16:00.463538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:39.143 [2024-11-05 16:16:00.477170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:39.143 [2024-11-05 16:16:00.477204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:50:39.143 [2024-11-05 16:16:00.477234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.612 ms 00:50:39.143 [2024-11-05 16:16:00.477243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:39.143 [2024-11-05 16:16:00.477642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:39.143 [2024-11-05 16:16:00.477655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:50:39.143 [2024-11-05 16:16:00.477665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.362 ms 00:50:39.143 [2024-11-05 
16:16:00.477672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:39.403 [2024-11-05 16:16:00.514607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:39.403 [2024-11-05 16:16:00.514661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:50:39.403 [2024-11-05 16:16:00.514675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:39.403 [2024-11-05 16:16:00.514685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:39.403 [2024-11-05 16:16:00.514776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:39.403 [2024-11-05 16:16:00.514794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:50:39.403 [2024-11-05 16:16:00.514805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:39.404 [2024-11-05 16:16:00.514814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:39.404 [2024-11-05 16:16:00.514904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:39.404 [2024-11-05 16:16:00.514916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:50:39.404 [2024-11-05 16:16:00.514951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:39.404 [2024-11-05 16:16:00.514960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:39.404 [2024-11-05 16:16:00.514979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:39.404 [2024-11-05 16:16:00.514988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:50:39.404 [2024-11-05 16:16:00.515001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:39.404 [2024-11-05 16:16:00.515009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:39.404 [2024-11-05 16:16:00.598797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:39.404 [2024-11-05 16:16:00.598856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:50:39.404 [2024-11-05 16:16:00.598869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:39.404 [2024-11-05 16:16:00.598877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:39.404 [2024-11-05 16:16:00.668823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:39.404 [2024-11-05 16:16:00.668883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:50:39.404 [2024-11-05 16:16:00.668905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:39.404 [2024-11-05 16:16:00.668913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:39.404 [2024-11-05 16:16:00.668978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:39.404 [2024-11-05 16:16:00.668988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:50:39.404 [2024-11-05 16:16:00.668997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:39.404 [2024-11-05 16:16:00.669006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:39.404 [2024-11-05 16:16:00.669065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:39.404 [2024-11-05 16:16:00.669075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:50:39.404 [2024-11-05 16:16:00.669084] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:39.404 [2024-11-05 16:16:00.669095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:39.404 [2024-11-05 16:16:00.669201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:39.404 [2024-11-05 16:16:00.669212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:50:39.404 [2024-11-05 16:16:00.669222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:39.404 [2024-11-05 16:16:00.669230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:39.404 [2024-11-05 16:16:00.669265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:39.404 [2024-11-05 16:16:00.669274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:50:39.404 [2024-11-05 16:16:00.669284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:39.404 [2024-11-05 16:16:00.669292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:39.404 [2024-11-05 16:16:00.669337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:39.404 [2024-11-05 16:16:00.669345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:50:39.404 [2024-11-05 16:16:00.669354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:39.404 [2024-11-05 16:16:00.669362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:39.404 [2024-11-05 16:16:00.669408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:39.404 [2024-11-05 16:16:00.669419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:50:39.404 [2024-11-05 16:16:00.669428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:39.404 [2024-11-05 16:16:00.669439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:39.404 [2024-11-05 16:16:00.669573] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 376.095 ms, result 0 00:50:40.347 00:50:40.347 00:50:40.347 16:16:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:50:42.263 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:50:42.263 16:16:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:50:42.263 16:16:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:50:42.263 16:16:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:50:42.263 16:16:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:50:42.524 16:16:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:50:42.524 16:16:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:50:42.524 16:16:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:50:42.524 Process with pid 77579 is not found 00:50:42.524 16:16:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 77579 00:50:42.524 16:16:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@952 -- # '[' -z 77579 ']' 00:50:42.524 16:16:03 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@956 -- # kill -0 77579 00:50:42.524 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (77579) - No such process 00:50:42.524 16:16:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@979 -- # echo 'Process with pid 77579 is not found' 00:50:42.524 16:16:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:50:42.784 Remove shared memory files 00:50:42.784 16:16:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:50:42.784 16:16:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:50:42.784 16:16:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:50:42.784 16:16:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:50:42.784 16:16:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:50:42.785 16:16:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:50:42.785 16:16:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:50:42.785 ************************************ 00:50:42.785 END TEST ftl_dirty_shutdown 00:50:42.785 ************************************ 00:50:42.785 00:50:42.785 real 4m32.741s 00:50:42.785 user 5m2.045s 00:50:42.785 sys 0m26.886s 00:50:42.785 16:16:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable 00:50:42.785 16:16:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:50:43.045 16:16:04 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:50:43.045 16:16:04 ftl -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:50:43.045 16:16:04 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:50:43.045 16:16:04 ftl -- common/autotest_common.sh@10 -- # set +x 00:50:43.045 ************************************ 00:50:43.045 START TEST ftl_upgrade_shutdown 00:50:43.045 ************************************ 00:50:43.045 16:16:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:50:43.045 * Looking for test storage... 
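A note on the restore_kill teardown traced just above: after removing ftl.json, the test files, and the md5 manifests, dirty_shutdown.sh calls killprocess 77579; the target app has already exited, so the kill -0 liveness probe fails with "No such process" and the helper merely reports that the pid is gone before remove_shm clears the shared-memory files. A minimal sketch of that probe-then-kill pattern, reconstructed from the trace rather than copied from autotest_common.sh:

killprocess() {                                    # sketch of the helper whose xtrace appears above
    local pid=$1
    [ -z "$pid" ] && return 1                      # matches the "[ -z 77579 ]" guard in the trace
    if kill -0 "$pid" 2>/dev/null; then            # signal 0 only probes whether the pid is alive
        kill "$pid"                                # alive: ask the app to exit
    else
        echo "Process with pid $pid is not found"  # the path taken above: the app already exited
    fi
}
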
00:50:43.045 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:50:43.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:50:43.046 --rc genhtml_branch_coverage=1 00:50:43.046 --rc genhtml_function_coverage=1 00:50:43.046 --rc genhtml_legend=1 00:50:43.046 --rc geninfo_all_blocks=1 00:50:43.046 --rc geninfo_unexecuted_blocks=1 00:50:43.046 00:50:43.046 ' 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:50:43.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:50:43.046 --rc genhtml_branch_coverage=1 00:50:43.046 --rc genhtml_function_coverage=1 00:50:43.046 --rc genhtml_legend=1 00:50:43.046 --rc geninfo_all_blocks=1 00:50:43.046 --rc geninfo_unexecuted_blocks=1 00:50:43.046 00:50:43.046 ' 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:50:43.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:50:43.046 --rc genhtml_branch_coverage=1 00:50:43.046 --rc genhtml_function_coverage=1 00:50:43.046 --rc genhtml_legend=1 00:50:43.046 --rc geninfo_all_blocks=1 00:50:43.046 --rc geninfo_unexecuted_blocks=1 00:50:43.046 00:50:43.046 ' 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:50:43.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:50:43.046 --rc genhtml_branch_coverage=1 00:50:43.046 --rc genhtml_function_coverage=1 00:50:43.046 --rc genhtml_legend=1 00:50:43.046 --rc geninfo_all_blocks=1 00:50:43.046 --rc geninfo_unexecuted_blocks=1 00:50:43.046 00:50:43.046 ' 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:50:43.046 16:16:04 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=80503 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 80503 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 80503 ']' 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:50:43.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:50:43.046 16:16:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:50:43.308 [2024-11-05 16:16:04.448256] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 
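tcp_target_setup above reduces to launching spdk_tgt pinned to core 0 and blocking until its RPC socket answers, which is what waitforlisten 80503 is doing. A minimal sketch of that launch-and-wait step, assuming the default /var/tmp/spdk.sock socket (the harness's waitforlisten does more bookkeeping than this):

SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/bin/spdk_tgt" --cpumask='[0]' &
spdk_tgt_pid=$!
# spdk_get_version is a cheap RPC; it succeeds once the server is listening
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock -t 1 spdk_get_version >/dev/null 2>&1; do
    kill -0 "$spdk_tgt_pid" 2>/dev/null || { echo "spdk_tgt exited during startup" >&2; exit 1; }
    sleep 0.5
done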
00:50:43.308 [2024-11-05 16:16:04.448530] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80503 ] 00:50:43.308 [2024-11-05 16:16:04.614478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:50:43.570 [2024-11-05 16:16:04.708051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:50:44.143 16:16:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:50:44.143 16:16:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:50:44.143 16:16:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:50:44.143 16:16:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:50:44.143 16:16:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:50:44.143 16:16:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:50:44.143 16:16:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:50:44.143 16:16:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:50:44.143 16:16:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:50:44.143 16:16:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:50:44.143 16:16:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:50:44.143 16:16:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:50:44.143 16:16:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:50:44.143 16:16:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:50:44.143 16:16:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:50:44.143 16:16:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:50:44.143 16:16:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:50:44.143 16:16:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:50:44.143 16:16:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:50:44.143 16:16:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:50:44.143 16:16:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:50:44.143 16:16:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:50:44.143 16:16:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:50:44.406 16:16:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:50:44.406 16:16:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:50:44.406 16:16:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:50:44.406 16:16:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=basen1 00:50:44.406 16:16:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:50:44.406 16:16:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:50:44.406 16:16:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 
-- # local nb 00:50:44.406 16:16:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:50:44.667 16:16:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:50:44.667 { 00:50:44.667 "name": "basen1", 00:50:44.667 "aliases": [ 00:50:44.667 "45560d69-0adb-4ff0-a534-6364bfb3da23" 00:50:44.667 ], 00:50:44.667 "product_name": "NVMe disk", 00:50:44.667 "block_size": 4096, 00:50:44.667 "num_blocks": 1310720, 00:50:44.667 "uuid": "45560d69-0adb-4ff0-a534-6364bfb3da23", 00:50:44.667 "numa_id": -1, 00:50:44.667 "assigned_rate_limits": { 00:50:44.667 "rw_ios_per_sec": 0, 00:50:44.667 "rw_mbytes_per_sec": 0, 00:50:44.667 "r_mbytes_per_sec": 0, 00:50:44.667 "w_mbytes_per_sec": 0 00:50:44.667 }, 00:50:44.667 "claimed": true, 00:50:44.667 "claim_type": "read_many_write_one", 00:50:44.667 "zoned": false, 00:50:44.667 "supported_io_types": { 00:50:44.667 "read": true, 00:50:44.667 "write": true, 00:50:44.667 "unmap": true, 00:50:44.667 "flush": true, 00:50:44.667 "reset": true, 00:50:44.667 "nvme_admin": true, 00:50:44.667 "nvme_io": true, 00:50:44.667 "nvme_io_md": false, 00:50:44.667 "write_zeroes": true, 00:50:44.667 "zcopy": false, 00:50:44.667 "get_zone_info": false, 00:50:44.667 "zone_management": false, 00:50:44.667 "zone_append": false, 00:50:44.667 "compare": true, 00:50:44.667 "compare_and_write": false, 00:50:44.667 "abort": true, 00:50:44.667 "seek_hole": false, 00:50:44.667 "seek_data": false, 00:50:44.667 "copy": true, 00:50:44.667 "nvme_iov_md": false 00:50:44.667 }, 00:50:44.667 "driver_specific": { 00:50:44.667 "nvme": [ 00:50:44.667 { 00:50:44.667 "pci_address": "0000:00:11.0", 00:50:44.667 "trid": { 00:50:44.667 "trtype": "PCIe", 00:50:44.667 "traddr": "0000:00:11.0" 00:50:44.667 }, 00:50:44.667 "ctrlr_data": { 00:50:44.667 "cntlid": 0, 00:50:44.667 "vendor_id": "0x1b36", 00:50:44.667 "model_number": "QEMU NVMe Ctrl", 00:50:44.667 "serial_number": "12341", 00:50:44.667 "firmware_revision": "8.0.0", 00:50:44.667 "subnqn": "nqn.2019-08.org.qemu:12341", 00:50:44.667 "oacs": { 00:50:44.667 "security": 0, 00:50:44.667 "format": 1, 00:50:44.667 "firmware": 0, 00:50:44.667 "ns_manage": 1 00:50:44.667 }, 00:50:44.667 "multi_ctrlr": false, 00:50:44.667 "ana_reporting": false 00:50:44.667 }, 00:50:44.667 "vs": { 00:50:44.667 "nvme_version": "1.4" 00:50:44.667 }, 00:50:44.667 "ns_data": { 00:50:44.667 "id": 1, 00:50:44.667 "can_share": false 00:50:44.667 } 00:50:44.667 } 00:50:44.667 ], 00:50:44.667 "mp_policy": "active_passive" 00:50:44.667 } 00:50:44.667 } 00:50:44.667 ]' 00:50:44.667 16:16:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:50:44.667 16:16:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:50:44.667 16:16:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:50:44.667 16:16:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # nb=1310720 00:50:44.667 16:16:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:50:44.667 16:16:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1390 -- # echo 5120 00:50:44.667 16:16:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:50:44.667 16:16:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:50:44.667 16:16:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:50:44.667 16:16:05 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:50:44.667 16:16:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:50:44.928 16:16:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=48f5e9cc-6b3f-4ec8-bdd8-56d0da4a883e 00:50:44.928 16:16:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:50:44.928 16:16:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 48f5e9cc-6b3f-4ec8-bdd8-56d0da4a883e 00:50:44.928 16:16:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:50:45.187 16:16:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=63d160a7-d1c9-4c8d-a279-b0b667483969 00:50:45.187 16:16:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 63d160a7-d1c9-4c8d-a279-b0b667483969 00:50:45.448 16:16:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=5a4ec4da-f369-4054-81e9-7d614812693b 00:50:45.448 16:16:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 5a4ec4da-f369-4054-81e9-7d614812693b ]] 00:50:45.448 16:16:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 5a4ec4da-f369-4054-81e9-7d614812693b 5120 00:50:45.448 16:16:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:50:45.448 16:16:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:50:45.448 16:16:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=5a4ec4da-f369-4054-81e9-7d614812693b 00:50:45.448 16:16:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:50:45.448 16:16:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 5a4ec4da-f369-4054-81e9-7d614812693b 00:50:45.448 16:16:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=5a4ec4da-f369-4054-81e9-7d614812693b 00:50:45.448 16:16:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:50:45.448 16:16:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:50:45.448 16:16:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:50:45.448 16:16:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5a4ec4da-f369-4054-81e9-7d614812693b 00:50:45.710 16:16:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:50:45.710 { 00:50:45.710 "name": "5a4ec4da-f369-4054-81e9-7d614812693b", 00:50:45.710 "aliases": [ 00:50:45.710 "lvs/basen1p0" 00:50:45.710 ], 00:50:45.710 "product_name": "Logical Volume", 00:50:45.710 "block_size": 4096, 00:50:45.710 "num_blocks": 5242880, 00:50:45.710 "uuid": "5a4ec4da-f369-4054-81e9-7d614812693b", 00:50:45.710 "assigned_rate_limits": { 00:50:45.710 "rw_ios_per_sec": 0, 00:50:45.710 "rw_mbytes_per_sec": 0, 00:50:45.710 "r_mbytes_per_sec": 0, 00:50:45.711 "w_mbytes_per_sec": 0 00:50:45.711 }, 00:50:45.711 "claimed": false, 00:50:45.711 "zoned": false, 00:50:45.711 "supported_io_types": { 00:50:45.711 "read": true, 00:50:45.711 "write": true, 00:50:45.711 "unmap": true, 00:50:45.711 "flush": false, 00:50:45.711 "reset": true, 00:50:45.711 "nvme_admin": false, 00:50:45.711 "nvme_io": false, 00:50:45.711 "nvme_io_md": false, 00:50:45.711 "write_zeroes": 
true, 00:50:45.711 "zcopy": false, 00:50:45.711 "get_zone_info": false, 00:50:45.711 "zone_management": false, 00:50:45.711 "zone_append": false, 00:50:45.711 "compare": false, 00:50:45.711 "compare_and_write": false, 00:50:45.711 "abort": false, 00:50:45.711 "seek_hole": true, 00:50:45.711 "seek_data": true, 00:50:45.711 "copy": false, 00:50:45.711 "nvme_iov_md": false 00:50:45.711 }, 00:50:45.711 "driver_specific": { 00:50:45.711 "lvol": { 00:50:45.711 "lvol_store_uuid": "63d160a7-d1c9-4c8d-a279-b0b667483969", 00:50:45.711 "base_bdev": "basen1", 00:50:45.711 "thin_provision": true, 00:50:45.711 "num_allocated_clusters": 0, 00:50:45.711 "snapshot": false, 00:50:45.711 "clone": false, 00:50:45.711 "esnap_clone": false 00:50:45.711 } 00:50:45.711 } 00:50:45.711 } 00:50:45.711 ]' 00:50:45.711 16:16:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:50:45.711 16:16:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:50:45.711 16:16:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:50:45.711 16:16:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # nb=5242880 00:50:45.711 16:16:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=20480 00:50:45.711 16:16:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1390 -- # echo 20480 00:50:45.711 16:16:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:50:45.711 16:16:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:50:45.711 16:16:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:50:45.973 16:16:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:50:45.973 16:16:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:50:45.973 16:16:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:50:46.234 16:16:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:50:46.234 16:16:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:50:46.234 16:16:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 5a4ec4da-f369-4054-81e9-7d614812693b -c cachen1p0 --l2p_dram_limit 2 00:50:46.497 [2024-11-05 16:16:07.682682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:46.497 [2024-11-05 16:16:07.682729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:50:46.497 [2024-11-05 16:16:07.682758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:50:46.497 [2024-11-05 16:16:07.682767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:46.497 [2024-11-05 16:16:07.682820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:46.497 [2024-11-05 16:16:07.682830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:50:46.497 [2024-11-05 16:16:07.682840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:50:46.497 [2024-11-05 16:16:07.682848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:46.497 [2024-11-05 16:16:07.682868] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:50:46.497 [2024-11-05 
16:16:07.683612] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:50:46.497 [2024-11-05 16:16:07.683636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:46.497 [2024-11-05 16:16:07.683644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:50:46.497 [2024-11-05 16:16:07.683653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.769 ms 00:50:46.497 [2024-11-05 16:16:07.683661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:46.497 [2024-11-05 16:16:07.683695] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 8c11d26c-8cb0-4981-b3d3-d10ebe806ca0 00:50:46.497 [2024-11-05 16:16:07.684783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:46.497 [2024-11-05 16:16:07.684816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:50:46.497 [2024-11-05 16:16:07.684826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:50:46.497 [2024-11-05 16:16:07.684836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:46.497 [2024-11-05 16:16:07.689957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:46.497 [2024-11-05 16:16:07.689990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:50:46.497 [2024-11-05 16:16:07.690002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.050 ms 00:50:46.497 [2024-11-05 16:16:07.690011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:46.497 [2024-11-05 16:16:07.690048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:46.497 [2024-11-05 16:16:07.690058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:50:46.497 [2024-11-05 16:16:07.690066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:50:46.497 [2024-11-05 16:16:07.690077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:46.497 [2024-11-05 16:16:07.690120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:46.497 [2024-11-05 16:16:07.690132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:50:46.497 [2024-11-05 16:16:07.690140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:50:46.497 [2024-11-05 16:16:07.690153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:46.497 [2024-11-05 16:16:07.690174] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:50:46.497 [2024-11-05 16:16:07.693672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:46.497 [2024-11-05 16:16:07.693705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:50:46.497 [2024-11-05 16:16:07.693719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.501 ms 00:50:46.497 [2024-11-05 16:16:07.693727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:46.497 [2024-11-05 16:16:07.693770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:46.497 [2024-11-05 16:16:07.693780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:50:46.497 [2024-11-05 16:16:07.693790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:50:46.497 [2024-11-05 16:16:07.693797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:50:46.497 [2024-11-05 16:16:07.693823] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:50:46.497 [2024-11-05 16:16:07.693960] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:50:46.497 [2024-11-05 16:16:07.693976] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:50:46.497 [2024-11-05 16:16:07.693987] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:50:46.497 [2024-11-05 16:16:07.693998] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:50:46.497 [2024-11-05 16:16:07.694008] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:50:46.497 [2024-11-05 16:16:07.694017] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:50:46.497 [2024-11-05 16:16:07.694024] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:50:46.497 [2024-11-05 16:16:07.694035] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:50:46.497 [2024-11-05 16:16:07.694042] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:50:46.497 [2024-11-05 16:16:07.694051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:46.497 [2024-11-05 16:16:07.694059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:50:46.497 [2024-11-05 16:16:07.694068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.229 ms 00:50:46.497 [2024-11-05 16:16:07.694075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:46.497 [2024-11-05 16:16:07.694159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:46.497 [2024-11-05 16:16:07.694169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:50:46.497 [2024-11-05 16:16:07.694180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.067 ms 00:50:46.497 [2024-11-05 16:16:07.694192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:46.497 [2024-11-05 16:16:07.694325] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:50:46.497 [2024-11-05 16:16:07.694336] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:50:46.497 [2024-11-05 16:16:07.694346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:50:46.497 [2024-11-05 16:16:07.694354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:50:46.497 [2024-11-05 16:16:07.694363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:50:46.497 [2024-11-05 16:16:07.694369] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:50:46.497 [2024-11-05 16:16:07.694378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:50:46.497 [2024-11-05 16:16:07.694386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:50:46.497 [2024-11-05 16:16:07.694394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:50:46.497 [2024-11-05 16:16:07.694401] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:50:46.497 [2024-11-05 16:16:07.694409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:50:46.497 [2024-11-05 16:16:07.694417] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:50:46.497 [2024-11-05 16:16:07.694425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:50:46.497 [2024-11-05 16:16:07.694432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:50:46.497 [2024-11-05 16:16:07.694440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:50:46.498 [2024-11-05 16:16:07.694447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:50:46.498 [2024-11-05 16:16:07.694456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:50:46.498 [2024-11-05 16:16:07.694464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:50:46.498 [2024-11-05 16:16:07.694473] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:50:46.498 [2024-11-05 16:16:07.694480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:50:46.498 [2024-11-05 16:16:07.694490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:50:46.498 [2024-11-05 16:16:07.694497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:50:46.498 [2024-11-05 16:16:07.694505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:50:46.498 [2024-11-05 16:16:07.694511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:50:46.498 [2024-11-05 16:16:07.694519] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:50:46.498 [2024-11-05 16:16:07.694526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:50:46.498 [2024-11-05 16:16:07.694534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:50:46.498 [2024-11-05 16:16:07.694540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:50:46.498 [2024-11-05 16:16:07.694548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:50:46.498 [2024-11-05 16:16:07.694554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:50:46.498 [2024-11-05 16:16:07.694562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:50:46.498 [2024-11-05 16:16:07.694569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:50:46.498 [2024-11-05 16:16:07.694578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:50:46.498 [2024-11-05 16:16:07.694585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:50:46.498 [2024-11-05 16:16:07.694593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:50:46.498 [2024-11-05 16:16:07.694600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:50:46.498 [2024-11-05 16:16:07.694607] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:50:46.498 [2024-11-05 16:16:07.694614] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:50:46.498 [2024-11-05 16:16:07.694622] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:50:46.498 [2024-11-05 16:16:07.694628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:50:46.498 [2024-11-05 16:16:07.694636] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:50:46.498 [2024-11-05 16:16:07.694644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:50:46.498 [2024-11-05 16:16:07.694652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:50:46.498 [2024-11-05 16:16:07.694658] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:50:46.498 [2024-11-05 16:16:07.694667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:50:46.498 [2024-11-05 16:16:07.694674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:50:46.498 [2024-11-05 16:16:07.694683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:50:46.498 [2024-11-05 16:16:07.694691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:50:46.498 [2024-11-05 16:16:07.694701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:50:46.498 [2024-11-05 16:16:07.694708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:50:46.498 [2024-11-05 16:16:07.694716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:50:46.498 [2024-11-05 16:16:07.694722] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:50:46.498 [2024-11-05 16:16:07.694731] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:50:46.498 [2024-11-05 16:16:07.694975] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:50:46.498 [2024-11-05 16:16:07.695015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:50:46.498 [2024-11-05 16:16:07.695048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:50:46.498 [2024-11-05 16:16:07.695079] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:50:46.498 [2024-11-05 16:16:07.695109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:50:46.498 [2024-11-05 16:16:07.695139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:50:46.498 [2024-11-05 16:16:07.695168] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:50:46.498 [2024-11-05 16:16:07.695197] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:50:46.498 [2024-11-05 16:16:07.695227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:50:46.498 [2024-11-05 16:16:07.695429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:50:46.498 [2024-11-05 16:16:07.695459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:50:46.498 [2024-11-05 16:16:07.695492] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:50:46.498 [2024-11-05 16:16:07.695520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:50:46.498 [2024-11-05 16:16:07.695550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:50:46.498 [2024-11-05 16:16:07.695578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:50:46.498 [2024-11-05 16:16:07.695611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:50:46.498 [2024-11-05 16:16:07.695640] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:50:46.498 [2024-11-05 16:16:07.695672] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:50:46.498 [2024-11-05 16:16:07.695700] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:50:46.498 [2024-11-05 16:16:07.695731] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:50:46.498 [2024-11-05 16:16:07.695773] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:50:46.498 [2024-11-05 16:16:07.695804] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:50:46.498 [2024-11-05 16:16:07.695913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:46.498 [2024-11-05 16:16:07.695936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:50:46.498 [2024-11-05 16:16:07.695957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.667 ms 00:50:46.498 [2024-11-05 16:16:07.696008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:46.498 [2024-11-05 16:16:07.696066] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
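As a sanity check on the superblock layout dump above: blk_offs and blk_sz are hex block counts, and with the 4096-byte block size both bdevs report, the large trailing region on the NV cache side (blk_offs:0x2fa0 blk_sz:0x13d060) plus the metadata regions ahead of it accounts exactly for the configured cache size. Worked out in shell arithmetic:

echo $(( 0x13d060 * 4096 ))   # 5318770688 bytes = 5072.375 MiB of trailing chunk space
echo $(( 0x2fa0 * 4096 ))     # 49938432 bytes = 47.625 MiB of metadata ahead of it
# 5072.375 MiB + 47.625 MiB = 5120 MiB, matching FTL_CACHE_SIZE=5120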
00:50:46.498 [2024-11-05 16:16:07.696104] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:50:49.800 [2024-11-05 16:16:11.157479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:49.800 [2024-11-05 16:16:11.157698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:50:49.800 [2024-11-05 16:16:11.157720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3461.399 ms 00:50:49.800 [2024-11-05 16:16:11.157731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:50.062 [2024-11-05 16:16:11.182881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:50.062 [2024-11-05 16:16:11.182924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:50:50.062 [2024-11-05 16:16:11.182937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.939 ms 00:50:50.062 [2024-11-05 16:16:11.182946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:50.062 [2024-11-05 16:16:11.183014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:50.062 [2024-11-05 16:16:11.183026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:50:50.062 [2024-11-05 16:16:11.183034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:50:50.062 [2024-11-05 16:16:11.183045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:50.062 [2024-11-05 16:16:11.213322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:50.062 [2024-11-05 16:16:11.213371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:50:50.062 [2024-11-05 16:16:11.213384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.241 ms 00:50:50.062 [2024-11-05 16:16:11.213393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:50.062 [2024-11-05 16:16:11.213422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:50.062 [2024-11-05 16:16:11.213435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:50:50.062 [2024-11-05 16:16:11.213444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:50:50.062 [2024-11-05 16:16:11.213453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:50.062 [2024-11-05 16:16:11.213816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:50.062 [2024-11-05 16:16:11.213834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:50:50.062 [2024-11-05 16:16:11.213843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.317 ms 00:50:50.062 [2024-11-05 16:16:11.213852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:50.062 [2024-11-05 16:16:11.213895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:50.062 [2024-11-05 16:16:11.213905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:50:50.062 [2024-11-05 16:16:11.213915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:50:50.062 [2024-11-05 16:16:11.213926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:50.062 [2024-11-05 16:16:11.227964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:50.062 [2024-11-05 16:16:11.228099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:50:50.062 [2024-11-05 16:16:11.228116] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.021 ms 00:50:50.062 [2024-11-05 16:16:11.228125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:50.062 [2024-11-05 16:16:11.239402] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:50:50.062 [2024-11-05 16:16:11.240314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:50.062 [2024-11-05 16:16:11.240341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:50:50.062 [2024-11-05 16:16:11.240353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.117 ms 00:50:50.062 [2024-11-05 16:16:11.240361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:50.062 [2024-11-05 16:16:11.274892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:50.062 [2024-11-05 16:16:11.274933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:50:50.062 [2024-11-05 16:16:11.274948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.503 ms 00:50:50.062 [2024-11-05 16:16:11.274957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:50.062 [2024-11-05 16:16:11.275042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:50.062 [2024-11-05 16:16:11.275055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:50:50.063 [2024-11-05 16:16:11.275067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:50:50.063 [2024-11-05 16:16:11.275075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:50.063 [2024-11-05 16:16:11.298198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:50.063 [2024-11-05 16:16:11.298340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:50:50.063 [2024-11-05 16:16:11.298364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.075 ms 00:50:50.063 [2024-11-05 16:16:11.298372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:50.063 [2024-11-05 16:16:11.321942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:50.063 [2024-11-05 16:16:11.321986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:50:50.063 [2024-11-05 16:16:11.322000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.529 ms 00:50:50.063 [2024-11-05 16:16:11.322009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:50.063 [2024-11-05 16:16:11.322584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:50.063 [2024-11-05 16:16:11.322601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:50:50.063 [2024-11-05 16:16:11.322611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.539 ms 00:50:50.063 [2024-11-05 16:16:11.322619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:50.063 [2024-11-05 16:16:11.398504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:50.063 [2024-11-05 16:16:11.398548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:50:50.063 [2024-11-05 16:16:11.398567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 75.846 ms 00:50:50.063 [2024-11-05 16:16:11.398575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:50.063 [2024-11-05 16:16:11.423373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:50:50.063 [2024-11-05 16:16:11.423425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:50:50.063 [2024-11-05 16:16:11.423445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.720 ms 00:50:50.063 [2024-11-05 16:16:11.423454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:50.324 [2024-11-05 16:16:11.447971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:50.324 [2024-11-05 16:16:11.448006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:50:50.324 [2024-11-05 16:16:11.448019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.476 ms 00:50:50.324 [2024-11-05 16:16:11.448026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:50.324 [2024-11-05 16:16:11.473417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:50.324 [2024-11-05 16:16:11.473555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:50:50.324 [2024-11-05 16:16:11.473576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.346 ms 00:50:50.324 [2024-11-05 16:16:11.473584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:50.324 [2024-11-05 16:16:11.473624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:50.324 [2024-11-05 16:16:11.473633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:50:50.324 [2024-11-05 16:16:11.473645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:50:50.324 [2024-11-05 16:16:11.473653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:50.324 [2024-11-05 16:16:11.473758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:50.324 [2024-11-05 16:16:11.473771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:50:50.324 [2024-11-05 16:16:11.473783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:50:50.324 [2024-11-05 16:16:11.473790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:50.324 [2024-11-05 16:16:11.474768] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3791.532 ms, result 0 00:50:50.324 { 00:50:50.324 "name": "ftl", 00:50:50.324 "uuid": "8c11d26c-8cb0-4981-b3d3-d10ebe806ca0" 00:50:50.324 } 00:50:50.324 16:16:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:50:50.324 [2024-11-05 16:16:11.682099] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:50:50.587 16:16:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:50:50.587 16:16:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:50:50.848 [2024-11-05 16:16:12.090524] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:50:50.848 16:16:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:50:51.110 [2024-11-05 16:16:12.294943] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:50:51.110 16:16:12 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:50:51.372 Fill FTL, iteration 1 00:50:51.372 16:16:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:50:51.372 16:16:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:50:51.372 16:16:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:50:51.372 16:16:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:50:51.372 16:16:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:50:51.372 16:16:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:50:51.372 16:16:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:50:51.372 16:16:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:50:51.372 16:16:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:50:51.372 16:16:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:50:51.372 16:16:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:50:51.372 16:16:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:50:51.372 16:16:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:50:51.372 16:16:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:50:51.372 16:16:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:50:51.372 16:16:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:50:51.372 16:16:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=80626 00:50:51.372 16:16:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:50:51.372 16:16:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 80626 /var/tmp/spdk.tgt.sock 00:50:51.372 16:16:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 80626 ']' 00:50:51.372 16:16:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:50:51.372 16:16:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:50:51.372 16:16:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:50:51.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:50:51.372 16:16:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:50:51.372 16:16:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:50:51.372 16:16:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:50:51.372 [2024-11-05 16:16:12.731196] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 
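The tcp_initiator_setup that starts here, together with the attach and save that follow, builds the ini.json the dd steps replay later: a second SPDK app pinned to core 1 with its own RPC socket attaches to the exported FTL bdev over NVMe/TCP, the bdev subsystem is dumped to JSON, and the app is killed again. Condensed into a sketch:

SPDK=/home/vagrant/spdk_repo/spdk
INI_RPC=/var/tmp/spdk.tgt.sock
"$SPDK/build/bin/spdk_tgt" --cpumask='[1]' --rpc-socket="$INI_RPC" &
# attach to the FTL bdev the target side exposed on 127.0.0.1:4420; this creates ftln1
"$SPDK/scripts/rpc.py" -s "$INI_RPC" bdev_nvme_attach_controller \
    -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0
# wrap just the bdev subsystem so spdk_dd can recreate ftln1 on its own
{
    echo '{"subsystems": ['
    "$SPDK/scripts/rpc.py" -s "$INI_RPC" save_subsystem_config -n bdev
    echo ']}'
} > "$SPDK/test/ftl/config/ini.json"
# the initiator app can be killed once the config is captured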
00:50:51.372 [2024-11-05 16:16:12.731307] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80626 ] 00:50:51.635 [2024-11-05 16:16:12.888967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:50:51.635 [2024-11-05 16:16:12.986561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:50:52.206 16:16:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:50:52.206 16:16:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:50:52.206 16:16:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:50:52.467 ftln1 00:50:52.467 16:16:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:50:52.467 16:16:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:50:52.727 16:16:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:50:52.727 16:16:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 80626 00:50:52.727 16:16:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # '[' -z 80626 ']' 00:50:52.727 16:16:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # kill -0 80626 00:50:52.727 16:16:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # uname 00:50:52.727 16:16:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:50:52.727 16:16:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80626 00:50:52.727 killing process with pid 80626 00:50:52.727 16:16:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:50:52.727 16:16:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:50:52.727 16:16:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80626' 00:50:52.727 16:16:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@971 -- # kill 80626 00:50:52.727 16:16:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@976 -- # wait 80626 00:50:54.641 16:16:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:50:54.641 16:16:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:50:54.641 [2024-11-05 16:16:15.574205] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 
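With ini.json captured there is no live initiator process to keep around: spdk_dd replays the JSON itself, recreates ftln1, and drives the I/O directly. The fill launched here writes a gigabyte of random data at queue depth 2; annotated, the invocation is:

"$SPDK/build/bin/spdk_dd" --cpumask='[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
    --json="$SPDK/test/ftl/config/ini.json" \
    --if=/dev/urandom --ob=ftln1 \
    --bs=1048576 --count=1024 --qd=2 --seek=0
# bs*count = 1024 x 1 MiB = 1 GiB; seek is the output offset in bs-sized blocks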
00:50:54.641 [2024-11-05 16:16:15.574331] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80670 ] 00:50:54.641 [2024-11-05 16:16:15.736570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:50:54.641 [2024-11-05 16:16:15.837516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:50:56.064  [2024-11-05T16:16:18.428Z] Copying: 184/1024 [MB] (184 MBps) [2024-11-05T16:16:19.370Z] Copying: 394/1024 [MB] (210 MBps) [2024-11-05T16:16:20.310Z] Copying: 618/1024 [MB] (224 MBps) [2024-11-05T16:16:21.248Z] Copying: 854/1024 [MB] (236 MBps) [2024-11-05T16:16:21.815Z] Copying: 1024/1024 [MB] (average 215 MBps) 00:51:00.453 00:51:00.453 16:16:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:51:00.453 Calculate MD5 checksum, iteration 1 00:51:00.453 16:16:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:51:00.453 16:16:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:51:00.453 16:16:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:51:00.453 16:16:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:51:00.453 16:16:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:51:00.453 16:16:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:51:00.453 16:16:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:51:00.453 [2024-11-05 16:16:21.731327] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 
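One detail worth spelling out in the trace above: seek and skip are dd-style block offsets, not bytes. seek=1024 after the first fill is 1024 × 1048576 = 1073741824 bytes, exactly the size set at upgrade_shutdown.sh@28, so iteration 2 will write the second GiB of ftln1 while the read-back just launched still starts at skip=0, covering the first GiB just written.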
00:51:00.453 [2024-11-05 16:16:21.732256] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80734 ] 00:51:00.712 [2024-11-05 16:16:21.898176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:51:00.712 [2024-11-05 16:16:21.992863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:51:02.087  [2024-11-05T16:16:24.016Z] Copying: 688/1024 [MB] (688 MBps) [2024-11-05T16:16:24.583Z] Copying: 1024/1024 [MB] (average 686 MBps) 00:51:03.221 00:51:03.221 16:16:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:51:03.221 16:16:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:51:05.760 16:16:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:51:05.760 Fill FTL, iteration 2 00:51:05.760 16:16:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=07f5a084729493410d3927ea70f8b94e 00:51:05.760 16:16:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:51:05.760 16:16:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:51:05.760 16:16:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:51:05.760 16:16:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:51:05.760 16:16:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:51:05.760 16:16:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:51:05.760 16:16:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:51:05.760 16:16:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:51:05.760 16:16:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:51:05.760 [2024-11-05 16:16:26.617105] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 
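Each tcp_dd in this log expands to the same pair of steps, visible at ftl/common.sh @198-@199: a (by now no-op) tcp_initiator_setup, then a spdk_dd run pinned to core 1 that replays the saved bdev config instead of re-attaching the controller. A sketch of the wrapper as it appears in the xtrace, not the verbatim source:

    tcp_dd() {
        tcp_initiator_setup   # returns immediately once ini.json is present
        "$SPDK_DIR/build/bin/spdk_dd" '--cpumask=[1]' \
            --rpc-socket=/var/tmp/spdk.tgt.sock \
            --json="$SPDK_DIR/test/ftl/config/ini.json" "$@"
    }

This is why every fill and read-back boots a fresh SPDK app: each "Starting SPDK … initialization" banner above and below belongs to a new short-lived spdk_dd process (pids 80670, 80734, 80792, 80861).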
00:51:05.760 [2024-11-05 16:16:26.617222] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80792 ] 00:51:05.760 [2024-11-05 16:16:26.774599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:51:05.760 [2024-11-05 16:16:26.874192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:51:07.147  [2024-11-05T16:16:29.450Z] Copying: 193/1024 [MB] (193 MBps) [2024-11-05T16:16:30.390Z] Copying: 390/1024 [MB] (197 MBps) [2024-11-05T16:16:31.325Z] Copying: 579/1024 [MB] (189 MBps) [2024-11-05T16:16:32.260Z] Copying: 831/1024 [MB] (252 MBps) [2024-11-05T16:16:32.831Z] Copying: 1024/1024 [MB] (average 216 MBps) 00:51:11.469 00:51:11.469 Calculate MD5 checksum, iteration 2 00:51:11.469 16:16:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:51:11.469 16:16:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:51:11.469 16:16:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:51:11.469 16:16:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:51:11.469 16:16:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:51:11.469 16:16:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:51:11.469 16:16:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:51:11.469 16:16:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:51:11.469 [2024-11-05 16:16:32.652559] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 
00:51:11.469 [2024-11-05 16:16:32.652677] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80861 ] 00:51:11.469 [2024-11-05 16:16:32.805248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:51:11.728 [2024-11-05 16:16:32.903811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:51:13.110  [2024-11-05T16:16:35.041Z] Copying: 653/1024 [MB] (653 MBps) [2024-11-05T16:16:35.973Z] Copying: 1024/1024 [MB] (average 666 MBps) 00:51:14.611 00:51:14.611 16:16:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:51:14.611 16:16:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:51:17.140 16:16:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:51:17.140 16:16:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=b7884b304b82c18e283a88db93c3f9b8 00:51:17.140 16:16:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:51:17.140 16:16:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:51:17.140 16:16:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:51:17.140 [2024-11-05 16:16:38.161006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:17.140 [2024-11-05 16:16:38.161049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:51:17.140 [2024-11-05 16:16:38.161061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:51:17.140 [2024-11-05 16:16:38.161068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:17.140 [2024-11-05 16:16:38.161087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:17.140 [2024-11-05 16:16:38.161094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:51:17.140 [2024-11-05 16:16:38.161101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:51:17.140 [2024-11-05 16:16:38.161109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:17.140 [2024-11-05 16:16:38.161125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:17.140 [2024-11-05 16:16:38.161132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:51:17.140 [2024-11-05 16:16:38.161138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:51:17.140 [2024-11-05 16:16:38.161144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:17.140 [2024-11-05 16:16:38.161193] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.179 ms, result 0 00:51:17.140 true 00:51:17.140 16:16:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:51:17.140 { 00:51:17.140 "name": "ftl", 00:51:17.140 "properties": [ 00:51:17.140 { 00:51:17.140 "name": "superblock_version", 00:51:17.140 "value": 5, 00:51:17.140 "read-only": true 00:51:17.140 }, 00:51:17.140 { 00:51:17.140 "name": "base_device", 00:51:17.140 "bands": [ 00:51:17.140 { 00:51:17.140 "id": 0, 00:51:17.140 "state": "FREE", 00:51:17.140 "validity": 0.0 
00:51:17.140 }, 00:51:17.140 { 00:51:17.140 "id": 1, 00:51:17.140 "state": "FREE", 00:51:17.141 "validity": 0.0 00:51:17.141 }, 00:51:17.141 { 00:51:17.141 "id": 2, 00:51:17.141 "state": "FREE", 00:51:17.141 "validity": 0.0 00:51:17.141 }, 00:51:17.141 { 00:51:17.141 "id": 3, 00:51:17.141 "state": "FREE", 00:51:17.141 "validity": 0.0 00:51:17.141 }, 00:51:17.141 { 00:51:17.141 "id": 4, 00:51:17.141 "state": "FREE", 00:51:17.141 "validity": 0.0 00:51:17.141 }, 00:51:17.141 { 00:51:17.141 "id": 5, 00:51:17.141 "state": "FREE", 00:51:17.141 "validity": 0.0 00:51:17.141 }, 00:51:17.141 { 00:51:17.141 "id": 6, 00:51:17.141 "state": "FREE", 00:51:17.141 "validity": 0.0 00:51:17.141 }, 00:51:17.141 { 00:51:17.141 "id": 7, 00:51:17.141 "state": "FREE", 00:51:17.141 "validity": 0.0 00:51:17.141 }, 00:51:17.141 { 00:51:17.141 "id": 8, 00:51:17.141 "state": "FREE", 00:51:17.141 "validity": 0.0 00:51:17.141 }, 00:51:17.141 { 00:51:17.141 "id": 9, 00:51:17.141 "state": "FREE", 00:51:17.141 "validity": 0.0 00:51:17.141 }, 00:51:17.141 { 00:51:17.141 "id": 10, 00:51:17.141 "state": "FREE", 00:51:17.141 "validity": 0.0 00:51:17.141 }, 00:51:17.141 { 00:51:17.141 "id": 11, 00:51:17.141 "state": "FREE", 00:51:17.141 "validity": 0.0 00:51:17.141 }, 00:51:17.141 { 00:51:17.141 "id": 12, 00:51:17.141 "state": "FREE", 00:51:17.141 "validity": 0.0 00:51:17.141 }, 00:51:17.141 { 00:51:17.141 "id": 13, 00:51:17.141 "state": "FREE", 00:51:17.141 "validity": 0.0 00:51:17.141 }, 00:51:17.141 { 00:51:17.141 "id": 14, 00:51:17.141 "state": "FREE", 00:51:17.141 "validity": 0.0 00:51:17.141 }, 00:51:17.141 { 00:51:17.141 "id": 15, 00:51:17.141 "state": "FREE", 00:51:17.141 "validity": 0.0 00:51:17.141 }, 00:51:17.141 { 00:51:17.141 "id": 16, 00:51:17.141 "state": "FREE", 00:51:17.141 "validity": 0.0 00:51:17.141 }, 00:51:17.141 { 00:51:17.141 "id": 17, 00:51:17.141 "state": "FREE", 00:51:17.141 "validity": 0.0 00:51:17.141 } 00:51:17.141 ], 00:51:17.141 "read-only": true 00:51:17.141 }, 00:51:17.141 { 00:51:17.141 "name": "cache_device", 00:51:17.141 "type": "bdev", 00:51:17.141 "chunks": [ 00:51:17.141 { 00:51:17.141 "id": 0, 00:51:17.141 "state": "INACTIVE", 00:51:17.141 "utilization": 0.0 00:51:17.141 }, 00:51:17.141 { 00:51:17.141 "id": 1, 00:51:17.141 "state": "CLOSED", 00:51:17.141 "utilization": 1.0 00:51:17.141 }, 00:51:17.141 { 00:51:17.141 "id": 2, 00:51:17.141 "state": "CLOSED", 00:51:17.141 "utilization": 1.0 00:51:17.141 }, 00:51:17.141 { 00:51:17.141 "id": 3, 00:51:17.141 "state": "OPEN", 00:51:17.141 "utilization": 0.001953125 00:51:17.141 }, 00:51:17.141 { 00:51:17.141 "id": 4, 00:51:17.141 "state": "OPEN", 00:51:17.141 "utilization": 0.0 00:51:17.141 } 00:51:17.141 ], 00:51:17.141 "read-only": true 00:51:17.141 }, 00:51:17.141 { 00:51:17.141 "name": "verbose_mode", 00:51:17.141 "value": true, 00:51:17.141 "unit": "", 00:51:17.141 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:51:17.141 }, 00:51:17.141 { 00:51:17.141 "name": "prep_upgrade_on_shutdown", 00:51:17.141 "value": false, 00:51:17.141 "unit": "", 00:51:17.141 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:51:17.141 } 00:51:17.141 ] 00:51:17.141 } 00:51:17.141 16:16:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:51:17.400 [2024-11-05 16:16:38.565367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:51:17.400 [2024-11-05 16:16:38.565499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:51:17.400 [2024-11-05 16:16:38.565554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:51:17.400 [2024-11-05 16:16:38.565573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:17.400 [2024-11-05 16:16:38.565606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:17.400 [2024-11-05 16:16:38.565655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:51:17.400 [2024-11-05 16:16:38.565673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:51:17.400 [2024-11-05 16:16:38.565687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:17.400 [2024-11-05 16:16:38.565747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:17.400 [2024-11-05 16:16:38.565767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:51:17.400 [2024-11-05 16:16:38.565783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:51:17.400 [2024-11-05 16:16:38.565798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:17.400 [2024-11-05 16:16:38.565889] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.507 ms, result 0 00:51:17.400 true 00:51:17.400 16:16:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:51:17.400 16:16:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:51:17.400 16:16:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:51:17.697 16:16:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:51:17.697 16:16:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:51:17.697 16:16:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:51:17.697 [2024-11-05 16:16:38.981751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:17.697 [2024-11-05 16:16:38.981891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:51:17.697 [2024-11-05 16:16:38.981943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:51:17.697 [2024-11-05 16:16:38.981962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:17.697 [2024-11-05 16:16:38.981994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:17.697 [2024-11-05 16:16:38.982011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:51:17.697 [2024-11-05 16:16:38.982026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:51:17.697 [2024-11-05 16:16:38.982040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:17.697 [2024-11-05 16:16:38.982064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:17.697 [2024-11-05 16:16:38.982080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:51:17.697 [2024-11-05 16:16:38.982096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:51:17.697 [2024-11-05 16:16:38.982133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:51:17.697 [2024-11-05 16:16:38.982195] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.440 ms, result 0 00:51:17.697 true 00:51:17.697 16:16:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:51:17.956 { 00:51:17.956 "name": "ftl", 00:51:17.956 "properties": [ 00:51:17.956 { 00:51:17.956 "name": "superblock_version", 00:51:17.956 "value": 5, 00:51:17.956 "read-only": true 00:51:17.956 }, 00:51:17.956 { 00:51:17.956 "name": "base_device", 00:51:17.956 "bands": [ 00:51:17.956 { 00:51:17.956 "id": 0, 00:51:17.956 "state": "FREE", 00:51:17.956 "validity": 0.0 00:51:17.956 }, 00:51:17.956 { 00:51:17.956 "id": 1, 00:51:17.956 "state": "FREE", 00:51:17.956 "validity": 0.0 00:51:17.956 }, 00:51:17.956 { 00:51:17.956 "id": 2, 00:51:17.956 "state": "FREE", 00:51:17.956 "validity": 0.0 00:51:17.956 }, 00:51:17.956 { 00:51:17.956 "id": 3, 00:51:17.956 "state": "FREE", 00:51:17.956 "validity": 0.0 00:51:17.956 }, 00:51:17.956 { 00:51:17.956 "id": 4, 00:51:17.956 "state": "FREE", 00:51:17.956 "validity": 0.0 00:51:17.956 }, 00:51:17.956 { 00:51:17.956 "id": 5, 00:51:17.956 "state": "FREE", 00:51:17.956 "validity": 0.0 00:51:17.956 }, 00:51:17.956 { 00:51:17.956 "id": 6, 00:51:17.956 "state": "FREE", 00:51:17.956 "validity": 0.0 00:51:17.956 }, 00:51:17.956 { 00:51:17.956 "id": 7, 00:51:17.956 "state": "FREE", 00:51:17.956 "validity": 0.0 00:51:17.956 }, 00:51:17.956 { 00:51:17.956 "id": 8, 00:51:17.956 "state": "FREE", 00:51:17.956 "validity": 0.0 00:51:17.956 }, 00:51:17.956 { 00:51:17.956 "id": 9, 00:51:17.956 "state": "FREE", 00:51:17.956 "validity": 0.0 00:51:17.956 }, 00:51:17.956 { 00:51:17.956 "id": 10, 00:51:17.956 "state": "FREE", 00:51:17.956 "validity": 0.0 00:51:17.956 }, 00:51:17.956 { 00:51:17.956 "id": 11, 00:51:17.956 "state": "FREE", 00:51:17.956 "validity": 0.0 00:51:17.956 }, 00:51:17.956 { 00:51:17.956 "id": 12, 00:51:17.956 "state": "FREE", 00:51:17.956 "validity": 0.0 00:51:17.956 }, 00:51:17.956 { 00:51:17.956 "id": 13, 00:51:17.956 "state": "FREE", 00:51:17.956 "validity": 0.0 00:51:17.956 }, 00:51:17.956 { 00:51:17.956 "id": 14, 00:51:17.956 "state": "FREE", 00:51:17.956 "validity": 0.0 00:51:17.956 }, 00:51:17.956 { 00:51:17.956 "id": 15, 00:51:17.956 "state": "FREE", 00:51:17.956 "validity": 0.0 00:51:17.956 }, 00:51:17.956 { 00:51:17.956 "id": 16, 00:51:17.956 "state": "FREE", 00:51:17.956 "validity": 0.0 00:51:17.956 }, 00:51:17.956 { 00:51:17.956 "id": 17, 00:51:17.956 "state": "FREE", 00:51:17.956 "validity": 0.0 00:51:17.956 } 00:51:17.956 ], 00:51:17.956 "read-only": true 00:51:17.956 }, 00:51:17.956 { 00:51:17.956 "name": "cache_device", 00:51:17.956 "type": "bdev", 00:51:17.956 "chunks": [ 00:51:17.956 { 00:51:17.956 "id": 0, 00:51:17.956 "state": "INACTIVE", 00:51:17.956 "utilization": 0.0 00:51:17.956 }, 00:51:17.956 { 00:51:17.956 "id": 1, 00:51:17.956 "state": "CLOSED", 00:51:17.956 "utilization": 1.0 00:51:17.956 }, 00:51:17.956 { 00:51:17.956 "id": 2, 00:51:17.956 "state": "CLOSED", 00:51:17.956 "utilization": 1.0 00:51:17.956 }, 00:51:17.956 { 00:51:17.956 "id": 3, 00:51:17.956 "state": "OPEN", 00:51:17.956 "utilization": 0.001953125 00:51:17.956 }, 00:51:17.956 { 00:51:17.956 "id": 4, 00:51:17.956 "state": "OPEN", 00:51:17.956 "utilization": 0.0 00:51:17.956 } 00:51:17.956 ], 00:51:17.956 "read-only": true 00:51:17.956 }, 00:51:17.956 { 00:51:17.956 "name": "verbose_mode", 
00:51:17.956 "value": true, 00:51:17.956 "unit": "", 00:51:17.956 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:51:17.956 }, 00:51:17.956 { 00:51:17.956 "name": "prep_upgrade_on_shutdown", 00:51:17.957 "value": true, 00:51:17.957 "unit": "", 00:51:17.957 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:51:17.957 } 00:51:17.957 ] 00:51:17.957 } 00:51:17.957 16:16:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:51:17.957 16:16:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 80503 ]] 00:51:17.957 16:16:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 80503 00:51:17.957 16:16:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # '[' -z 80503 ']' 00:51:17.957 16:16:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # kill -0 80503 00:51:17.957 16:16:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # uname 00:51:17.957 16:16:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:51:17.957 16:16:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80503 00:51:17.957 killing process with pid 80503 00:51:17.957 16:16:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:51:17.957 16:16:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:51:17.957 16:16:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80503' 00:51:17.957 16:16:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@971 -- # kill 80503 00:51:17.957 16:16:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@976 -- # wait 80503 00:51:18.522 [2024-11-05 16:16:39.793560] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:51:18.522 [2024-11-05 16:16:39.804055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:18.522 [2024-11-05 16:16:39.804088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:51:18.522 [2024-11-05 16:16:39.804098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:51:18.522 [2024-11-05 16:16:39.804105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:18.522 [2024-11-05 16:16:39.804122] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:51:18.522 [2024-11-05 16:16:39.806210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:18.522 [2024-11-05 16:16:39.806246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:51:18.522 [2024-11-05 16:16:39.806253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.078 ms 00:51:18.522 [2024-11-05 16:16:39.806260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:28.540 [2024-11-05 16:16:48.408318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:28.540 [2024-11-05 16:16:48.408388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:51:28.540 [2024-11-05 16:16:48.408402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8602.009 ms 00:51:28.540 [2024-11-05 16:16:48.408411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:28.540 [2024-11-05 16:16:48.409538] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:51:28.540 [2024-11-05 16:16:48.409701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:51:28.540 [2024-11-05 16:16:48.409716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.107 ms 00:51:28.540 [2024-11-05 16:16:48.409724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:28.540 [2024-11-05 16:16:48.410875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:28.540 [2024-11-05 16:16:48.410892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:51:28.540 [2024-11-05 16:16:48.410903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.114 ms 00:51:28.540 [2024-11-05 16:16:48.410912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:28.540 [2024-11-05 16:16:48.421123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:28.540 [2024-11-05 16:16:48.421156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:51:28.540 [2024-11-05 16:16:48.421166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.163 ms 00:51:28.540 [2024-11-05 16:16:48.421174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:28.540 [2024-11-05 16:16:48.427328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:28.540 [2024-11-05 16:16:48.427360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:51:28.540 [2024-11-05 16:16:48.427371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.121 ms 00:51:28.540 [2024-11-05 16:16:48.427378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:28.540 [2024-11-05 16:16:48.427456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:28.540 [2024-11-05 16:16:48.427466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:51:28.540 [2024-11-05 16:16:48.427474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:51:28.540 [2024-11-05 16:16:48.427486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:28.540 [2024-11-05 16:16:48.436506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:28.540 [2024-11-05 16:16:48.436538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:51:28.540 [2024-11-05 16:16:48.436549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.004 ms 00:51:28.540 [2024-11-05 16:16:48.436557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:28.541 [2024-11-05 16:16:48.445659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:28.541 [2024-11-05 16:16:48.445689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:51:28.541 [2024-11-05 16:16:48.445698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.072 ms 00:51:28.541 [2024-11-05 16:16:48.445706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:28.541 [2024-11-05 16:16:48.455089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:28.541 [2024-11-05 16:16:48.455212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:51:28.541 [2024-11-05 16:16:48.455226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.340 ms 00:51:28.541 [2024-11-05 16:16:48.455234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:28.541 [2024-11-05 16:16:48.465050] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:28.541 [2024-11-05 16:16:48.465079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:51:28.541 [2024-11-05 16:16:48.465088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.746 ms 00:51:28.541 [2024-11-05 16:16:48.465095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:28.541 [2024-11-05 16:16:48.465123] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:51:28.541 [2024-11-05 16:16:48.465137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:51:28.541 [2024-11-05 16:16:48.465147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:51:28.541 [2024-11-05 16:16:48.465164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:51:28.541 [2024-11-05 16:16:48.465172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:51:28.541 [2024-11-05 16:16:48.465180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:51:28.541 [2024-11-05 16:16:48.465187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:51:28.541 [2024-11-05 16:16:48.465195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:51:28.541 [2024-11-05 16:16:48.465202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:51:28.541 [2024-11-05 16:16:48.465210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:51:28.541 [2024-11-05 16:16:48.465217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:51:28.541 [2024-11-05 16:16:48.465225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:51:28.541 [2024-11-05 16:16:48.465232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:51:28.541 [2024-11-05 16:16:48.465239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:51:28.541 [2024-11-05 16:16:48.465247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:51:28.541 [2024-11-05 16:16:48.465254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:51:28.541 [2024-11-05 16:16:48.465261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:51:28.541 [2024-11-05 16:16:48.465268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:51:28.541 [2024-11-05 16:16:48.465275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:51:28.541 [2024-11-05 16:16:48.465284] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:51:28.541 [2024-11-05 16:16:48.465292] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 8c11d26c-8cb0-4981-b3d3-d10ebe806ca0 00:51:28.541 [2024-11-05 16:16:48.465299] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:51:28.541 [2024-11-05 16:16:48.465306] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:51:28.541 [2024-11-05 16:16:48.465313] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:51:28.541 [2024-11-05 16:16:48.465320] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:51:28.541 [2024-11-05 16:16:48.465326] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:51:28.541 [2024-11-05 16:16:48.465335] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:51:28.541 [2024-11-05 16:16:48.465343] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:51:28.541 [2024-11-05 16:16:48.465349] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:51:28.541 [2024-11-05 16:16:48.465356] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:51:28.541 [2024-11-05 16:16:48.465364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:28.541 [2024-11-05 16:16:48.465374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:51:28.541 [2024-11-05 16:16:48.465385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.242 ms 00:51:28.541 [2024-11-05 16:16:48.465393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:28.541 [2024-11-05 16:16:48.477753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:28.541 [2024-11-05 16:16:48.477783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:51:28.541 [2024-11-05 16:16:48.477794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.345 ms 00:51:28.541 [2024-11-05 16:16:48.477803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:28.541 [2024-11-05 16:16:48.478155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:28.541 [2024-11-05 16:16:48.478164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:51:28.541 [2024-11-05 16:16:48.478172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.317 ms 00:51:28.541 [2024-11-05 16:16:48.478180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:28.541 [2024-11-05 16:16:48.519743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:51:28.541 [2024-11-05 16:16:48.519783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:51:28.541 [2024-11-05 16:16:48.519794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:51:28.541 [2024-11-05 16:16:48.519803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:28.541 [2024-11-05 16:16:48.519836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:51:28.541 [2024-11-05 16:16:48.519845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:51:28.541 [2024-11-05 16:16:48.519852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:51:28.541 [2024-11-05 16:16:48.519859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:28.541 [2024-11-05 16:16:48.519946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:51:28.541 [2024-11-05 16:16:48.519958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:51:28.541 [2024-11-05 16:16:48.519966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:51:28.541 [2024-11-05 16:16:48.519973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:28.541 [2024-11-05 16:16:48.519991] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:51:28.541 [2024-11-05 16:16:48.519999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:51:28.541 [2024-11-05 16:16:48.520007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:51:28.541 [2024-11-05 16:16:48.520014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:28.541 [2024-11-05 16:16:48.595672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:51:28.541 [2024-11-05 16:16:48.595715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:51:28.541 [2024-11-05 16:16:48.595727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:51:28.541 [2024-11-05 16:16:48.595758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:28.541 [2024-11-05 16:16:48.659432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:51:28.541 [2024-11-05 16:16:48.659475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:51:28.541 [2024-11-05 16:16:48.659486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:51:28.541 [2024-11-05 16:16:48.659494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:28.541 [2024-11-05 16:16:48.659556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:51:28.541 [2024-11-05 16:16:48.659565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:51:28.541 [2024-11-05 16:16:48.659573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:51:28.541 [2024-11-05 16:16:48.659581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:28.541 [2024-11-05 16:16:48.659634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:51:28.541 [2024-11-05 16:16:48.659648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:51:28.541 [2024-11-05 16:16:48.659656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:51:28.541 [2024-11-05 16:16:48.659663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:28.541 [2024-11-05 16:16:48.659772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:51:28.541 [2024-11-05 16:16:48.659782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:51:28.541 [2024-11-05 16:16:48.659790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:51:28.541 [2024-11-05 16:16:48.659798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:28.541 [2024-11-05 16:16:48.659827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:51:28.541 [2024-11-05 16:16:48.659836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:51:28.541 [2024-11-05 16:16:48.659846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:51:28.541 [2024-11-05 16:16:48.659853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:28.541 [2024-11-05 16:16:48.659887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:51:28.541 [2024-11-05 16:16:48.659895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:51:28.541 [2024-11-05 16:16:48.659903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:51:28.541 [2024-11-05 16:16:48.659911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:28.541 
[2024-11-05 16:16:48.659955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:51:28.541 [2024-11-05 16:16:48.659965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:51:28.542 [2024-11-05 16:16:48.659975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:51:28.542 [2024-11-05 16:16:48.659983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:28.542 [2024-11-05 16:16:48.660092] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8855.981 ms, result 0 00:51:31.847 16:16:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:51:31.848 16:16:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:51:31.848 16:16:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:51:31.848 16:16:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:51:31.848 16:16:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:51:31.848 16:16:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=81056 00:51:31.848 16:16:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:51:31.848 16:16:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:51:31.848 16:16:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 81056 00:51:31.848 16:16:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 81056 ']' 00:51:31.848 16:16:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:51:31.848 16:16:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:51:31.848 16:16:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:51:31.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:51:31.848 16:16:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:51:31.848 16:16:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:51:31.848 [2024-11-05 16:16:52.608929] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 
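Two numbers in the section above deserve a closer look. First, before the target was killed, upgrade_shutdown.sh@63 counted the cache chunks that actually hold data, to confirm the shutdown has upgrade work to do. The jq filter from the trace, in context (a sketch; what the script does when the count is zero is not shown in this excerpt):

    used=$("$SPDK_DIR/scripts/rpc.py" bdev_ftl_get_properties -b ftl \
        | jq '[.properties[] | select(.name == "cache_device")
               | .chunks[] | select(.utilization != 0.0)] | length')
    [[ $used -eq 0 ]]   # traced as: [[ 3 -eq 0 ]], i.e. chunks 1-3 are in use

The count of 3 matches the properties dump: chunks 1 and 2 are CLOSED at utilization 1.0 and chunk 3 is OPEN at 0.001953125, while chunk 0 is INACTIVE and chunk 4 is empty. Second, the statistics dumped during shutdown are consistent with the two fills: assuming FTL's 4 KiB logical block, user writes of 524288 blocks are exactly the 2 GiB pushed through ftln1 (2 × 1073741824 / 4096 = 524288), and the reported WAF is total writes over user writes, 786752 / 524288 ≈ 1.5006.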
00:51:31.848 [2024-11-05 16:16:52.609056] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81056 ] 00:51:31.848 [2024-11-05 16:16:52.766382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:51:31.848 [2024-11-05 16:16:52.867750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:51:32.420 [2024-11-05 16:16:53.548126] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:51:32.420 [2024-11-05 16:16:53.548197] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:51:32.420 [2024-11-05 16:16:53.700324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:32.420 [2024-11-05 16:16:53.700371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:51:32.420 [2024-11-05 16:16:53.700384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:51:32.420 [2024-11-05 16:16:53.700392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:32.420 [2024-11-05 16:16:53.700442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:32.420 [2024-11-05 16:16:53.700453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:51:32.420 [2024-11-05 16:16:53.700461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:51:32.420 [2024-11-05 16:16:53.700468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:32.420 [2024-11-05 16:16:53.700489] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:51:32.420 [2024-11-05 16:16:53.701218] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:51:32.420 [2024-11-05 16:16:53.701235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:32.420 [2024-11-05 16:16:53.701242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:51:32.420 [2024-11-05 16:16:53.701250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.753 ms 00:51:32.420 [2024-11-05 16:16:53.701257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:32.420 [2024-11-05 16:16:53.702375] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:51:32.420 [2024-11-05 16:16:53.715533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:32.420 [2024-11-05 16:16:53.715565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:51:32.420 [2024-11-05 16:16:53.715582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.160 ms 00:51:32.420 [2024-11-05 16:16:53.715589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:32.420 [2024-11-05 16:16:53.715642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:32.420 [2024-11-05 16:16:53.715651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:51:32.420 [2024-11-05 16:16:53.715659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:51:32.420 [2024-11-05 16:16:53.715666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:32.420 [2024-11-05 16:16:53.720437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:32.420 [2024-11-05 
16:16:53.720469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:51:32.420 [2024-11-05 16:16:53.720479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.703 ms 00:51:32.420 [2024-11-05 16:16:53.720488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:32.420 [2024-11-05 16:16:53.720544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:32.420 [2024-11-05 16:16:53.720553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:51:32.420 [2024-11-05 16:16:53.720561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:51:32.420 [2024-11-05 16:16:53.720568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:32.420 [2024-11-05 16:16:53.720602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:32.420 [2024-11-05 16:16:53.720610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:51:32.420 [2024-11-05 16:16:53.720621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:51:32.420 [2024-11-05 16:16:53.720628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:32.420 [2024-11-05 16:16:53.720647] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:51:32.420 [2024-11-05 16:16:53.724098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:32.420 [2024-11-05 16:16:53.724223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:51:32.420 [2024-11-05 16:16:53.724237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.455 ms 00:51:32.420 [2024-11-05 16:16:53.724249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:32.420 [2024-11-05 16:16:53.724278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:32.420 [2024-11-05 16:16:53.724286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:51:32.420 [2024-11-05 16:16:53.724294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:51:32.420 [2024-11-05 16:16:53.724301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:32.420 [2024-11-05 16:16:53.724321] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:51:32.420 [2024-11-05 16:16:53.724338] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:51:32.420 [2024-11-05 16:16:53.724375] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:51:32.420 [2024-11-05 16:16:53.724389] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:51:32.420 [2024-11-05 16:16:53.724492] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:51:32.420 [2024-11-05 16:16:53.724502] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:51:32.420 [2024-11-05 16:16:53.724512] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:51:32.420 [2024-11-05 16:16:53.724522] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:51:32.420 [2024-11-05 16:16:53.724530] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:51:32.420 [2024-11-05 16:16:53.724540] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:51:32.420 [2024-11-05 16:16:53.724548] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:51:32.420 [2024-11-05 16:16:53.724555] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:51:32.420 [2024-11-05 16:16:53.724562] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:51:32.420 [2024-11-05 16:16:53.724569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:32.420 [2024-11-05 16:16:53.724577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:51:32.420 [2024-11-05 16:16:53.724584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.250 ms 00:51:32.420 [2024-11-05 16:16:53.724591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:32.420 [2024-11-05 16:16:53.724675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:32.420 [2024-11-05 16:16:53.724682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:51:32.420 [2024-11-05 16:16:53.724690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:51:32.420 [2024-11-05 16:16:53.724698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:32.420 [2024-11-05 16:16:53.724830] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:51:32.420 [2024-11-05 16:16:53.724841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:51:32.420 [2024-11-05 16:16:53.724850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:51:32.420 [2024-11-05 16:16:53.724857] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:51:32.420 [2024-11-05 16:16:53.724865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:51:32.420 [2024-11-05 16:16:53.724871] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:51:32.420 [2024-11-05 16:16:53.724878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:51:32.420 [2024-11-05 16:16:53.724886] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:51:32.420 [2024-11-05 16:16:53.724894] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:51:32.420 [2024-11-05 16:16:53.724900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:51:32.420 [2024-11-05 16:16:53.724907] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:51:32.420 [2024-11-05 16:16:53.724913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:51:32.420 [2024-11-05 16:16:53.724919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:51:32.420 [2024-11-05 16:16:53.724926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:51:32.420 [2024-11-05 16:16:53.724932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:51:32.420 [2024-11-05 16:16:53.724939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:51:32.420 [2024-11-05 16:16:53.724945] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:51:32.420 [2024-11-05 16:16:53.724952] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:51:32.420 [2024-11-05 16:16:53.724958] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:51:32.420 [2024-11-05 16:16:53.724965] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:51:32.420 [2024-11-05 16:16:53.724971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:51:32.420 [2024-11-05 16:16:53.724977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:51:32.420 [2024-11-05 16:16:53.724984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:51:32.420 [2024-11-05 16:16:53.724990] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:51:32.420 [2024-11-05 16:16:53.724997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:51:32.420 [2024-11-05 16:16:53.725010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:51:32.420 [2024-11-05 16:16:53.725019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:51:32.421 [2024-11-05 16:16:53.725026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:51:32.421 [2024-11-05 16:16:53.725032] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:51:32.421 [2024-11-05 16:16:53.725039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:51:32.421 [2024-11-05 16:16:53.725046] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:51:32.421 [2024-11-05 16:16:53.725052] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:51:32.421 [2024-11-05 16:16:53.725058] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:51:32.421 [2024-11-05 16:16:53.725064] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:51:32.421 [2024-11-05 16:16:53.725071] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:51:32.421 [2024-11-05 16:16:53.725077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:51:32.421 [2024-11-05 16:16:53.725083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:51:32.421 [2024-11-05 16:16:53.725090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:51:32.421 [2024-11-05 16:16:53.725096] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:51:32.421 [2024-11-05 16:16:53.725103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:51:32.421 [2024-11-05 16:16:53.725109] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:51:32.421 [2024-11-05 16:16:53.725115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:51:32.421 [2024-11-05 16:16:53.725122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:51:32.421 [2024-11-05 16:16:53.725128] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:51:32.421 [2024-11-05 16:16:53.725135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:51:32.421 [2024-11-05 16:16:53.725142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:51:32.421 [2024-11-05 16:16:53.725149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:51:32.421 [2024-11-05 16:16:53.725158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:51:32.421 [2024-11-05 16:16:53.725165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:51:32.421 [2024-11-05 16:16:53.725172] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:51:32.421 [2024-11-05 16:16:53.725178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:51:32.421 [2024-11-05 16:16:53.725184] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:51:32.421 [2024-11-05 16:16:53.725191] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:51:32.421 [2024-11-05 16:16:53.725199] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:51:32.421 [2024-11-05 16:16:53.725207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:51:32.421 [2024-11-05 16:16:53.725215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:51:32.421 [2024-11-05 16:16:53.725223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:51:32.421 [2024-11-05 16:16:53.725230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:51:32.421 [2024-11-05 16:16:53.725236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:51:32.421 [2024-11-05 16:16:53.725243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:51:32.421 [2024-11-05 16:16:53.725250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:51:32.421 [2024-11-05 16:16:53.725256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:51:32.421 [2024-11-05 16:16:53.725263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:51:32.421 [2024-11-05 16:16:53.725270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:51:32.421 [2024-11-05 16:16:53.725277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:51:32.421 [2024-11-05 16:16:53.725284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:51:32.421 [2024-11-05 16:16:53.725291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:51:32.421 [2024-11-05 16:16:53.725297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:51:32.421 [2024-11-05 16:16:53.725304] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:51:32.421 [2024-11-05 16:16:53.725311] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:51:32.421 [2024-11-05 16:16:53.725320] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:51:32.421 [2024-11-05 16:16:53.725330] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:51:32.421 [2024-11-05 16:16:53.725337] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:51:32.421 [2024-11-05 16:16:53.725344] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:51:32.421 [2024-11-05 16:16:53.725351] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:51:32.421 [2024-11-05 16:16:53.725359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:32.421 [2024-11-05 16:16:53.725366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:51:32.421 [2024-11-05 16:16:53.725373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.599 ms 00:51:32.421 [2024-11-05 16:16:53.725380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:32.421 [2024-11-05 16:16:53.725418] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:51:32.421 [2024-11-05 16:16:53.725428] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:51:36.625 [2024-11-05 16:16:57.098853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:36.625 [2024-11-05 16:16:57.098914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:51:36.625 [2024-11-05 16:16:57.098930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3373.420 ms 00:51:36.625 [2024-11-05 16:16:57.098938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:36.625 [2024-11-05 16:16:57.124397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:36.625 [2024-11-05 16:16:57.124442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:51:36.625 [2024-11-05 16:16:57.124455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.256 ms 00:51:36.625 [2024-11-05 16:16:57.124464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:36.625 [2024-11-05 16:16:57.124555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:36.625 [2024-11-05 16:16:57.124570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:51:36.625 [2024-11-05 16:16:57.124578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:51:36.625 [2024-11-05 16:16:57.124586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:36.625 [2024-11-05 16:16:57.154803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:36.625 [2024-11-05 16:16:57.154972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:51:36.625 [2024-11-05 16:16:57.154991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.178 ms 00:51:36.625 [2024-11-05 16:16:57.155005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:36.625 [2024-11-05 16:16:57.155035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:36.625 [2024-11-05 16:16:57.155043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:51:36.625 [2024-11-05 16:16:57.155052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:51:36.625 [2024-11-05 16:16:57.155059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:36.625 [2024-11-05 16:16:57.155413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:36.625 [2024-11-05 16:16:57.155429] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:51:36.625 [2024-11-05 16:16:57.155439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.302 ms 00:51:36.625 [2024-11-05 16:16:57.155447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:36.625 [2024-11-05 16:16:57.155488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:36.625 [2024-11-05 16:16:57.155496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:51:36.625 [2024-11-05 16:16:57.155504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:51:36.625 [2024-11-05 16:16:57.155511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:36.625 [2024-11-05 16:16:57.169554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:36.625 [2024-11-05 16:16:57.169584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:51:36.625 [2024-11-05 16:16:57.169594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.021 ms 00:51:36.625 [2024-11-05 16:16:57.169601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:36.625 [2024-11-05 16:16:57.182105] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:51:36.625 [2024-11-05 16:16:57.182139] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:51:36.625 [2024-11-05 16:16:57.182151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:36.625 [2024-11-05 16:16:57.182159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:51:36.625 [2024-11-05 16:16:57.182167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.455 ms 00:51:36.625 [2024-11-05 16:16:57.182174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:36.625 [2024-11-05 16:16:57.195767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:36.625 [2024-11-05 16:16:57.195797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:51:36.625 [2024-11-05 16:16:57.195807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.557 ms 00:51:36.625 [2024-11-05 16:16:57.195815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:36.625 [2024-11-05 16:16:57.207484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:36.625 [2024-11-05 16:16:57.207524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:51:36.625 [2024-11-05 16:16:57.207533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.631 ms 00:51:36.625 [2024-11-05 16:16:57.207540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:36.625 [2024-11-05 16:16:57.218489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:36.625 [2024-11-05 16:16:57.218518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:51:36.625 [2024-11-05 16:16:57.218528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.916 ms 00:51:36.625 [2024-11-05 16:16:57.218535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:36.625 [2024-11-05 16:16:57.219135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:36.625 [2024-11-05 16:16:57.219159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:51:36.625 [2024-11-05 
16:16:57.219169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.515 ms 00:51:36.625 [2024-11-05 16:16:57.219176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:36.625 [2024-11-05 16:16:57.286479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:36.625 [2024-11-05 16:16:57.286671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:51:36.625 [2024-11-05 16:16:57.286692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 67.283 ms 00:51:36.625 [2024-11-05 16:16:57.286701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:36.625 [2024-11-05 16:16:57.296966] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:51:36.625 [2024-11-05 16:16:57.297766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:36.625 [2024-11-05 16:16:57.297790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:51:36.625 [2024-11-05 16:16:57.297801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.004 ms 00:51:36.625 [2024-11-05 16:16:57.297808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:36.625 [2024-11-05 16:16:57.297882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:36.625 [2024-11-05 16:16:57.297896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:51:36.625 [2024-11-05 16:16:57.297904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:51:36.625 [2024-11-05 16:16:57.297912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:36.625 [2024-11-05 16:16:57.297966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:36.625 [2024-11-05 16:16:57.297976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:51:36.625 [2024-11-05 16:16:57.297985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:51:36.625 [2024-11-05 16:16:57.297992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:36.625 [2024-11-05 16:16:57.298013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:36.625 [2024-11-05 16:16:57.298020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:51:36.625 [2024-11-05 16:16:57.298028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:51:36.625 [2024-11-05 16:16:57.298038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:36.625 [2024-11-05 16:16:57.298069] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:51:36.625 [2024-11-05 16:16:57.298078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:36.625 [2024-11-05 16:16:57.298085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:51:36.625 [2024-11-05 16:16:57.298093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:51:36.625 [2024-11-05 16:16:57.298100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:36.625 [2024-11-05 16:16:57.321267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:36.625 [2024-11-05 16:16:57.321305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:51:36.625 [2024-11-05 16:16:57.321316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.149 ms 00:51:36.625 [2024-11-05 16:16:57.321324] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:36.625 [2024-11-05 16:16:57.321402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:36.625 [2024-11-05 16:16:57.321412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:51:36.625 [2024-11-05 16:16:57.321420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:51:36.625 [2024-11-05 16:16:57.321427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:36.625 [2024-11-05 16:16:57.322340] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3621.583 ms, result 0 00:51:36.625 [2024-11-05 16:16:57.337632] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:51:36.625 [2024-11-05 16:16:57.353624] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:51:36.625 [2024-11-05 16:16:57.361758] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:51:36.625 16:16:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:51:36.625 16:16:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:51:36.625 16:16:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:51:36.625 16:16:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:51:36.626 16:16:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:51:36.626 [2024-11-05 16:16:57.581853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:36.626 [2024-11-05 16:16:57.581902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:51:36.626 [2024-11-05 16:16:57.581916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:51:36.626 [2024-11-05 16:16:57.581927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:36.626 [2024-11-05 16:16:57.581950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:36.626 [2024-11-05 16:16:57.581958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:51:36.626 [2024-11-05 16:16:57.581966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:51:36.626 [2024-11-05 16:16:57.581974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:36.626 [2024-11-05 16:16:57.581992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:36.626 [2024-11-05 16:16:57.582000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:51:36.626 [2024-11-05 16:16:57.582008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:51:36.626 [2024-11-05 16:16:57.582016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:36.626 [2024-11-05 16:16:57.582073] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.209 ms, result 0 00:51:36.626 true 00:51:36.626 16:16:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:51:36.626 { 00:51:36.626 "name": "ftl", 00:51:36.626 "properties": [ 00:51:36.626 { 00:51:36.626 "name": "superblock_version", 00:51:36.626 "value": 5, 00:51:36.626 "read-only": true 00:51:36.626 }, 
00:51:36.626 { 00:51:36.626 "name": "base_device", 00:51:36.626 "bands": [ 00:51:36.626 { 00:51:36.626 "id": 0, 00:51:36.626 "state": "CLOSED", 00:51:36.626 "validity": 1.0 00:51:36.626 }, 00:51:36.626 { 00:51:36.626 "id": 1, 00:51:36.626 "state": "CLOSED", 00:51:36.626 "validity": 1.0 00:51:36.626 }, 00:51:36.626 { 00:51:36.626 "id": 2, 00:51:36.626 "state": "CLOSED", 00:51:36.626 "validity": 0.007843137254901933 00:51:36.626 }, 00:51:36.626 { 00:51:36.626 "id": 3, 00:51:36.626 "state": "FREE", 00:51:36.626 "validity": 0.0 00:51:36.626 }, 00:51:36.626 { 00:51:36.626 "id": 4, 00:51:36.626 "state": "FREE", 00:51:36.626 "validity": 0.0 00:51:36.626 }, 00:51:36.626 { 00:51:36.626 "id": 5, 00:51:36.626 "state": "FREE", 00:51:36.626 "validity": 0.0 00:51:36.626 }, 00:51:36.626 { 00:51:36.626 "id": 6, 00:51:36.626 "state": "FREE", 00:51:36.626 "validity": 0.0 00:51:36.626 }, 00:51:36.626 { 00:51:36.626 "id": 7, 00:51:36.626 "state": "FREE", 00:51:36.626 "validity": 0.0 00:51:36.626 }, 00:51:36.626 { 00:51:36.626 "id": 8, 00:51:36.626 "state": "FREE", 00:51:36.626 "validity": 0.0 00:51:36.626 }, 00:51:36.626 { 00:51:36.626 "id": 9, 00:51:36.626 "state": "FREE", 00:51:36.626 "validity": 0.0 00:51:36.626 }, 00:51:36.626 { 00:51:36.626 "id": 10, 00:51:36.626 "state": "FREE", 00:51:36.626 "validity": 0.0 00:51:36.626 }, 00:51:36.626 { 00:51:36.626 "id": 11, 00:51:36.626 "state": "FREE", 00:51:36.626 "validity": 0.0 00:51:36.626 }, 00:51:36.626 { 00:51:36.626 "id": 12, 00:51:36.626 "state": "FREE", 00:51:36.626 "validity": 0.0 00:51:36.626 }, 00:51:36.626 { 00:51:36.626 "id": 13, 00:51:36.626 "state": "FREE", 00:51:36.626 "validity": 0.0 00:51:36.626 }, 00:51:36.626 { 00:51:36.626 "id": 14, 00:51:36.626 "state": "FREE", 00:51:36.626 "validity": 0.0 00:51:36.626 }, 00:51:36.626 { 00:51:36.626 "id": 15, 00:51:36.626 "state": "FREE", 00:51:36.626 "validity": 0.0 00:51:36.626 }, 00:51:36.626 { 00:51:36.626 "id": 16, 00:51:36.626 "state": "FREE", 00:51:36.626 "validity": 0.0 00:51:36.626 }, 00:51:36.626 { 00:51:36.626 "id": 17, 00:51:36.626 "state": "FREE", 00:51:36.626 "validity": 0.0 00:51:36.626 } 00:51:36.626 ], 00:51:36.626 "read-only": true 00:51:36.626 }, 00:51:36.626 { 00:51:36.626 "name": "cache_device", 00:51:36.626 "type": "bdev", 00:51:36.626 "chunks": [ 00:51:36.626 { 00:51:36.626 "id": 0, 00:51:36.626 "state": "INACTIVE", 00:51:36.626 "utilization": 0.0 00:51:36.626 }, 00:51:36.626 { 00:51:36.626 "id": 1, 00:51:36.626 "state": "OPEN", 00:51:36.626 "utilization": 0.0 00:51:36.626 }, 00:51:36.626 { 00:51:36.626 "id": 2, 00:51:36.626 "state": "OPEN", 00:51:36.626 "utilization": 0.0 00:51:36.626 }, 00:51:36.626 { 00:51:36.626 "id": 3, 00:51:36.626 "state": "FREE", 00:51:36.626 "utilization": 0.0 00:51:36.626 }, 00:51:36.626 { 00:51:36.626 "id": 4, 00:51:36.626 "state": "FREE", 00:51:36.626 "utilization": 0.0 00:51:36.626 } 00:51:36.626 ], 00:51:36.626 "read-only": true 00:51:36.626 }, 00:51:36.626 { 00:51:36.626 "name": "verbose_mode", 00:51:36.626 "value": true, 00:51:36.626 "unit": "", 00:51:36.626 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:51:36.626 }, 00:51:36.626 { 00:51:36.626 "name": "prep_upgrade_on_shutdown", 00:51:36.626 "value": false, 00:51:36.626 "unit": "", 00:51:36.626 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:51:36.626 } 00:51:36.626 ] 00:51:36.626 } 00:51:36.626 16:16:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:51:36.626 16:16:57 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:51:36.626 16:16:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:51:36.886 16:16:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:51:36.886 16:16:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:51:36.886 16:16:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:51:36.887 16:16:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:51:36.887 16:16:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:51:37.148 16:16:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:51:37.148 16:16:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:51:37.148 16:16:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:51:37.148 16:16:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:51:37.148 16:16:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:51:37.148 16:16:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:51:37.148 Validate MD5 checksum, iteration 1 00:51:37.148 16:16:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:51:37.148 16:16:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:51:37.148 16:16:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:51:37.148 16:16:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:51:37.148 16:16:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:51:37.148 16:16:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:51:37.148 16:16:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:51:37.148 [2024-11-05 16:16:58.382534] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 
00:51:37.148 [2024-11-05 16:16:58.382848] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81137 ] 00:51:37.409 [2024-11-05 16:16:58.543034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:51:37.409 [2024-11-05 16:16:58.674357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:51:39.335  [2024-11-05T16:17:00.959Z] Copying: 660/1024 [MB] (660 MBps) [2024-11-05T16:17:02.345Z] Copying: 1024/1024 [MB] (average 631 MBps) 00:51:40.983 00:51:40.983 16:17:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:51:40.983 16:17:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:51:43.532 16:17:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:51:43.532 16:17:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=07f5a084729493410d3927ea70f8b94e 00:51:43.532 Validate MD5 checksum, iteration 2 00:51:43.532 16:17:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 07f5a084729493410d3927ea70f8b94e != \0\7\f\5\a\0\8\4\7\2\9\4\9\3\4\1\0\d\3\9\2\7\e\a\7\0\f\8\b\9\4\e ]] 00:51:43.532 16:17:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:51:43.532 16:17:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:51:43.532 16:17:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:51:43.532 16:17:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:51:43.532 16:17:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:51:43.532 16:17:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:51:43.532 16:17:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:51:43.532 16:17:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:51:43.532 16:17:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:51:43.532 [2024-11-05 16:17:04.345267] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 
00:51:43.532 [2024-11-05 16:17:04.345594] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81204 ] 00:51:43.532 [2024-11-05 16:17:04.509897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:51:43.532 [2024-11-05 16:17:04.636441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:51:44.918  [2024-11-05T16:17:07.221Z] Copying: 584/1024 [MB] (584 MBps) [2024-11-05T16:17:10.522Z] Copying: 1024/1024 [MB] (average 567 MBps) 00:51:49.160 00:51:49.160 16:17:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:51:49.160 16:17:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:51:51.690 16:17:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:51:51.690 16:17:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=b7884b304b82c18e283a88db93c3f9b8 00:51:51.690 16:17:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ b7884b304b82c18e283a88db93c3f9b8 != \b\7\8\8\4\b\3\0\4\b\8\2\c\1\8\e\2\8\3\a\8\8\d\b\9\3\c\3\f\9\b\8 ]] 00:51:51.690 16:17:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:51:51.690 16:17:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:51:51.690 16:17:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:51:51.690 16:17:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 81056 ]] 00:51:51.690 16:17:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 81056 00:51:51.690 16:17:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:51:51.690 16:17:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:51:51.690 16:17:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:51:51.690 16:17:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:51:51.690 16:17:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:51:51.690 16:17:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=81294 00:51:51.690 16:17:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:51:51.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:51:51.690 16:17:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:51:51.690 16:17:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 81294 00:51:51.690 16:17:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 81294 ']' 00:51:51.690 16:17:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:51:51.690 16:17:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:51:51.690 16:17:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:51:51.690 16:17:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:51:51.690 16:17:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:51:51.690 [2024-11-05 16:17:12.559296] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 00:51:51.690 [2024-11-05 16:17:12.559413] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81294 ] 00:51:51.690 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: 81056 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:51:51.690 [2024-11-05 16:17:12.716517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:51:51.690 [2024-11-05 16:17:12.796813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:51:52.258 [2024-11-05 16:17:13.371294] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:51:52.258 [2024-11-05 16:17:13.371347] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:51:52.258 [2024-11-05 16:17:13.514472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:52.258 [2024-11-05 16:17:13.514607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:51:52.258 [2024-11-05 16:17:13.514623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:51:52.258 [2024-11-05 16:17:13.514630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:52.258 [2024-11-05 16:17:13.514677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:52.258 [2024-11-05 16:17:13.514685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:51:52.258 [2024-11-05 16:17:13.514691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:51:52.258 [2024-11-05 16:17:13.514697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:52.258 [2024-11-05 16:17:13.514717] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:51:52.258 [2024-11-05 16:17:13.515247] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:51:52.258 [2024-11-05 16:17:13.515260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:52.258 [2024-11-05 16:17:13.515266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:51:52.258 [2024-11-05 16:17:13.515273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.549 ms 00:51:52.258 [2024-11-05 16:17:13.515279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:52.258 [2024-11-05 16:17:13.515510] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:51:52.258 [2024-11-05 16:17:13.528026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:52.258 [2024-11-05 16:17:13.528054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:51:52.258 [2024-11-05 16:17:13.528064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.518 ms 00:51:52.258 [2024-11-05 16:17:13.528072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:52.258 [2024-11-05 16:17:13.534893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:51:52.258 [2024-11-05 16:17:13.534918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:51:52.258 [2024-11-05 16:17:13.534928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:51:52.258 [2024-11-05 16:17:13.534933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:52.258 [2024-11-05 16:17:13.535177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:52.258 [2024-11-05 16:17:13.535191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:51:52.258 [2024-11-05 16:17:13.535198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.183 ms 00:51:52.258 [2024-11-05 16:17:13.535204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:52.258 [2024-11-05 16:17:13.535240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:52.258 [2024-11-05 16:17:13.535249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:51:52.258 [2024-11-05 16:17:13.535255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:51:52.258 [2024-11-05 16:17:13.535261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:52.258 [2024-11-05 16:17:13.535279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:52.258 [2024-11-05 16:17:13.535286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:51:52.258 [2024-11-05 16:17:13.535292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:51:52.258 [2024-11-05 16:17:13.535298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:52.258 [2024-11-05 16:17:13.535313] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:51:52.258 [2024-11-05 16:17:13.537545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:52.258 [2024-11-05 16:17:13.537643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:51:52.258 [2024-11-05 16:17:13.537655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.236 ms 00:51:52.258 [2024-11-05 16:17:13.537662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:52.258 [2024-11-05 16:17:13.537684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:52.258 [2024-11-05 16:17:13.537691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:51:52.258 [2024-11-05 16:17:13.537697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:51:52.258 [2024-11-05 16:17:13.537702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:52.258 [2024-11-05 16:17:13.537719] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:51:52.258 [2024-11-05 16:17:13.537742] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:51:52.258 [2024-11-05 16:17:13.537770] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:51:52.258 [2024-11-05 16:17:13.537783] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:51:52.258 [2024-11-05 16:17:13.537865] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:51:52.258 [2024-11-05 16:17:13.537873] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:51:52.258 [2024-11-05 16:17:13.537881] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:51:52.258 [2024-11-05 16:17:13.537889] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:51:52.258 [2024-11-05 16:17:13.537896] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:51:52.258 [2024-11-05 16:17:13.537902] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:51:52.258 [2024-11-05 16:17:13.537908] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:51:52.258 [2024-11-05 16:17:13.537914] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:51:52.258 [2024-11-05 16:17:13.537920] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:51:52.258 [2024-11-05 16:17:13.537925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:52.258 [2024-11-05 16:17:13.537933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:51:52.258 [2024-11-05 16:17:13.537939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.207 ms 00:51:52.258 [2024-11-05 16:17:13.537944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:52.258 [2024-11-05 16:17:13.538010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:52.258 [2024-11-05 16:17:13.538016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:51:52.258 [2024-11-05 16:17:13.538022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:51:52.258 [2024-11-05 16:17:13.538027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:52.258 [2024-11-05 16:17:13.538105] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:51:52.258 [2024-11-05 16:17:13.538112] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:51:52.258 [2024-11-05 16:17:13.538120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:51:52.258 [2024-11-05 16:17:13.538126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:51:52.258 [2024-11-05 16:17:13.538135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:51:52.258 [2024-11-05 16:17:13.538141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:51:52.258 [2024-11-05 16:17:13.538146] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:51:52.258 [2024-11-05 16:17:13.538151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:51:52.258 [2024-11-05 16:17:13.538157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:51:52.258 [2024-11-05 16:17:13.538163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:51:52.258 [2024-11-05 16:17:13.538168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:51:52.258 [2024-11-05 16:17:13.538173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:51:52.258 [2024-11-05 16:17:13.538178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:51:52.258 [2024-11-05 16:17:13.538184] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:51:52.258 [2024-11-05 16:17:13.538189] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:51:52.258 [2024-11-05 16:17:13.538194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:51:52.258 [2024-11-05 16:17:13.538200] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:51:52.258 [2024-11-05 16:17:13.538205] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:51:52.258 [2024-11-05 16:17:13.538210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:51:52.258 [2024-11-05 16:17:13.538223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:51:52.259 [2024-11-05 16:17:13.538228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:51:52.259 [2024-11-05 16:17:13.538233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:51:52.259 [2024-11-05 16:17:13.538239] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:51:52.259 [2024-11-05 16:17:13.538249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:51:52.259 [2024-11-05 16:17:13.538254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:51:52.259 [2024-11-05 16:17:13.538259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:51:52.259 [2024-11-05 16:17:13.538265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:51:52.259 [2024-11-05 16:17:13.538270] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:51:52.259 [2024-11-05 16:17:13.538275] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:51:52.259 [2024-11-05 16:17:13.538281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:51:52.259 [2024-11-05 16:17:13.538286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:51:52.259 [2024-11-05 16:17:13.538291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:51:52.259 [2024-11-05 16:17:13.538296] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:51:52.259 [2024-11-05 16:17:13.538301] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:51:52.259 [2024-11-05 16:17:13.538306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:51:52.259 [2024-11-05 16:17:13.538312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:51:52.259 [2024-11-05 16:17:13.538319] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:51:52.259 [2024-11-05 16:17:13.538325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:51:52.259 [2024-11-05 16:17:13.538330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:51:52.259 [2024-11-05 16:17:13.538335] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:51:52.259 [2024-11-05 16:17:13.538340] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:51:52.259 [2024-11-05 16:17:13.538345] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:51:52.259 [2024-11-05 16:17:13.538350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:51:52.259 [2024-11-05 16:17:13.538355] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:51:52.259 [2024-11-05 16:17:13.538362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:51:52.259 [2024-11-05 16:17:13.538367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:51:52.259 [2024-11-05 16:17:13.538373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:51:52.259 [2024-11-05 16:17:13.538379] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:51:52.259 [2024-11-05 16:17:13.538384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:51:52.259 [2024-11-05 16:17:13.538389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:51:52.259 [2024-11-05 16:17:13.538395] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:51:52.259 [2024-11-05 16:17:13.538399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:51:52.259 [2024-11-05 16:17:13.538405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:51:52.259 [2024-11-05 16:17:13.538411] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:51:52.259 [2024-11-05 16:17:13.538419] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:51:52.259 [2024-11-05 16:17:13.538425] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:51:52.259 [2024-11-05 16:17:13.538431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:51:52.259 [2024-11-05 16:17:13.538436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:51:52.259 [2024-11-05 16:17:13.538442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:51:52.259 [2024-11-05 16:17:13.538447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:51:52.259 [2024-11-05 16:17:13.538453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:51:52.259 [2024-11-05 16:17:13.538458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:51:52.259 [2024-11-05 16:17:13.538464] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:51:52.259 [2024-11-05 16:17:13.538469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:51:52.259 [2024-11-05 16:17:13.538475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:51:52.259 [2024-11-05 16:17:13.538480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:51:52.259 [2024-11-05 16:17:13.538486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:51:52.259 [2024-11-05 16:17:13.538491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:51:52.259 [2024-11-05 16:17:13.538499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:51:52.259 [2024-11-05 16:17:13.538504] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:51:52.259 [2024-11-05 16:17:13.538510] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:51:52.259 [2024-11-05 16:17:13.538516] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:51:52.259 [2024-11-05 16:17:13.538522] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:51:52.259 [2024-11-05 16:17:13.538528] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:51:52.259 [2024-11-05 16:17:13.538533] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:51:52.259 [2024-11-05 16:17:13.538539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:52.259 [2024-11-05 16:17:13.538546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:51:52.259 [2024-11-05 16:17:13.538552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.489 ms 00:51:52.259 [2024-11-05 16:17:13.538557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:52.259 [2024-11-05 16:17:13.558085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:52.259 [2024-11-05 16:17:13.558175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:51:52.259 [2024-11-05 16:17:13.558225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.489 ms 00:51:52.259 [2024-11-05 16:17:13.558244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:52.259 [2024-11-05 16:17:13.558314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:52.259 [2024-11-05 16:17:13.558351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:51:52.259 [2024-11-05 16:17:13.558369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:51:52.259 [2024-11-05 16:17:13.558385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:52.259 [2024-11-05 16:17:13.582632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:52.259 [2024-11-05 16:17:13.582720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:51:52.259 [2024-11-05 16:17:13.582775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.144 ms 00:51:52.259 [2024-11-05 16:17:13.582796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:52.259 [2024-11-05 16:17:13.582834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:52.259 [2024-11-05 16:17:13.582900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:51:52.259 [2024-11-05 16:17:13.582920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:51:52.259 [2024-11-05 16:17:13.582935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:52.259 [2024-11-05 16:17:13.583048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:52.259 [2024-11-05 16:17:13.583102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:51:52.259 [2024-11-05 16:17:13.583139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:51:52.259 [2024-11-05 16:17:13.583156] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:51:52.259 [2024-11-05 16:17:13.583200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:52.259 [2024-11-05 16:17:13.583217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:51:52.259 [2024-11-05 16:17:13.583232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:51:52.259 [2024-11-05 16:17:13.583275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:52.259 [2024-11-05 16:17:13.594802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:52.259 [2024-11-05 16:17:13.594882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:51:52.259 [2024-11-05 16:17:13.594921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.499 ms 00:51:52.259 [2024-11-05 16:17:13.594938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:52.259 [2024-11-05 16:17:13.595026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:52.259 [2024-11-05 16:17:13.595047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:51:52.259 [2024-11-05 16:17:13.595063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:51:52.259 [2024-11-05 16:17:13.595102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:52.520 [2024-11-05 16:17:13.625104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:52.520 [2024-11-05 16:17:13.625273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:51:52.520 [2024-11-05 16:17:13.625358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.943 ms 00:51:52.520 [2024-11-05 16:17:13.625395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:52.520 [2024-11-05 16:17:13.633026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:52.520 [2024-11-05 16:17:13.633106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:51:52.520 [2024-11-05 16:17:13.633165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.413 ms 00:51:52.520 [2024-11-05 16:17:13.633182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:52.520 [2024-11-05 16:17:13.675891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:52.520 [2024-11-05 16:17:13.676009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:51:52.520 [2024-11-05 16:17:13.676058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 42.651 ms 00:51:52.520 [2024-11-05 16:17:13.676076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:52.520 [2024-11-05 16:17:13.676185] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:51:52.520 [2024-11-05 16:17:13.676306] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:51:52.520 [2024-11-05 16:17:13.676572] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:51:52.520 [2024-11-05 16:17:13.676762] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:51:52.520 [2024-11-05 16:17:13.676814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:52.520 [2024-11-05 16:17:13.676830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:51:52.520 [2024-11-05 
16:17:13.676866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.701 ms 00:51:52.520 [2024-11-05 16:17:13.676882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:52.520 [2024-11-05 16:17:13.676941] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:51:52.520 [2024-11-05 16:17:13.676997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:52.520 [2024-11-05 16:17:13.677017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:51:52.520 [2024-11-05 16:17:13.677033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.056 ms 00:51:52.520 [2024-11-05 16:17:13.677064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:52.520 [2024-11-05 16:17:13.687797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:52.520 [2024-11-05 16:17:13.687881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:51:52.520 [2024-11-05 16:17:13.687922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.703 ms 00:51:52.520 [2024-11-05 16:17:13.687940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:52.520 [2024-11-05 16:17:13.694287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:52.520 [2024-11-05 16:17:13.694358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:51:52.520 [2024-11-05 16:17:13.694394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:51:52.520 [2024-11-05 16:17:13.694411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:52.520 [2024-11-05 16:17:13.694484] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:51:52.520 [2024-11-05 16:17:13.694609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:52.520 [2024-11-05 16:17:13.694632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:51:52.520 [2024-11-05 16:17:13.694702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.126 ms 00:51:52.520 [2024-11-05 16:17:13.694721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:53.097 [2024-11-05 16:17:14.224806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:53.097 [2024-11-05 16:17:14.225031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:51:53.097 [2024-11-05 16:17:14.225102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 529.420 ms 00:51:53.097 [2024-11-05 16:17:14.225131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:53.097 [2024-11-05 16:17:14.229687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:53.097 [2024-11-05 16:17:14.229827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:51:53.097 [2024-11-05 16:17:14.229885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.520 ms 00:51:53.097 [2024-11-05 16:17:14.229912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:53.097 [2024-11-05 16:17:14.231323] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:51:53.097 [2024-11-05 16:17:14.231761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:53.097 [2024-11-05 16:17:14.231940] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:51:53.097 [2024-11-05 16:17:14.231979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.426 ms 00:51:53.097 [2024-11-05 16:17:14.232003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:53.097 [2024-11-05 16:17:14.232104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:53.097 [2024-11-05 16:17:14.232133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:51:53.097 [2024-11-05 16:17:14.232158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:51:53.097 [2024-11-05 16:17:14.232179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:53.097 [2024-11-05 16:17:14.232308] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 537.779 ms, result 0 00:51:53.097 [2024-11-05 16:17:14.232426] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:51:53.097 [2024-11-05 16:17:14.232581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:53.097 [2024-11-05 16:17:14.232607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:51:53.097 [2024-11-05 16:17:14.232630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.158 ms 00:51:53.097 [2024-11-05 16:17:14.232651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:53.678 [2024-11-05 16:17:15.018767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:53.678 [2024-11-05 16:17:15.019061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:51:53.678 [2024-11-05 16:17:15.019089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 783.495 ms 00:51:53.678 [2024-11-05 16:17:15.019099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:53.678 [2024-11-05 16:17:15.024066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:53.678 [2024-11-05 16:17:15.024117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:51:53.678 [2024-11-05 16:17:15.024130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.798 ms 00:51:53.678 [2024-11-05 16:17:15.024139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:53.678 [2024-11-05 16:17:15.025222] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:51:53.678 [2024-11-05 16:17:15.025259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:53.678 [2024-11-05 16:17:15.025270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:51:53.678 [2024-11-05 16:17:15.025281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.085 ms 00:51:53.678 [2024-11-05 16:17:15.025291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:53.678 [2024-11-05 16:17:15.025330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:53.678 [2024-11-05 16:17:15.025341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:51:53.678 [2024-11-05 16:17:15.025351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:51:53.678 [2024-11-05 16:17:15.025360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:53.678 [2024-11-05 
16:17:15.025402] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 792.999 ms, result 0 00:51:53.678 [2024-11-05 16:17:15.025453] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:51:53.679 [2024-11-05 16:17:15.025465] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:51:53.679 [2024-11-05 16:17:15.025477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:53.679 [2024-11-05 16:17:15.025487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:51:53.679 [2024-11-05 16:17:15.025497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1331.003 ms 00:51:53.679 [2024-11-05 16:17:15.025506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:53.679 [2024-11-05 16:17:15.025542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:53.679 [2024-11-05 16:17:15.025552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:51:53.679 [2024-11-05 16:17:15.025567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:51:53.679 [2024-11-05 16:17:15.025577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:53.679 [2024-11-05 16:17:15.038609] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:51:53.679 [2024-11-05 16:17:15.038917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:53.679 [2024-11-05 16:17:15.038936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:51:53.679 [2024-11-05 16:17:15.038948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.322 ms 00:51:53.679 [2024-11-05 16:17:15.038956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:53.679 [2024-11-05 16:17:15.039673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:53.679 [2024-11-05 16:17:15.039696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:51:53.679 [2024-11-05 16:17:15.039710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.633 ms 00:51:53.679 [2024-11-05 16:17:15.039718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:53.940 [2024-11-05 16:17:15.041986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:53.940 [2024-11-05 16:17:15.042133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:51:53.940 [2024-11-05 16:17:15.042150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.229 ms 00:51:53.940 [2024-11-05 16:17:15.042159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:53.940 [2024-11-05 16:17:15.042232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:53.940 [2024-11-05 16:17:15.042243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:51:53.940 [2024-11-05 16:17:15.042251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:51:53.940 [2024-11-05 16:17:15.042263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:53.940 [2024-11-05 16:17:15.042376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:53.940 [2024-11-05 16:17:15.042386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:51:53.940 
[2024-11-05 16:17:15.042394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:51:53.940 [2024-11-05 16:17:15.042402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:53.940 [2024-11-05 16:17:15.042424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:53.940 [2024-11-05 16:17:15.042432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:51:53.940 [2024-11-05 16:17:15.042441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:51:53.940 [2024-11-05 16:17:15.042448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:53.940 [2024-11-05 16:17:15.042482] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:51:53.940 [2024-11-05 16:17:15.042494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:53.940 [2024-11-05 16:17:15.042503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:51:53.940 [2024-11-05 16:17:15.042511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:51:53.940 [2024-11-05 16:17:15.042519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:53.940 [2024-11-05 16:17:15.042572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:51:53.941 [2024-11-05 16:17:15.042581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:51:53.941 [2024-11-05 16:17:15.042590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:51:53.941 [2024-11-05 16:17:15.042598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:51:53.941 [2024-11-05 16:17:15.043792] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1528.752 ms, result 0 00:51:53.941 [2024-11-05 16:17:15.059425] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:51:53.941 [2024-11-05 16:17:15.075411] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:51:53.941 [2024-11-05 16:17:15.084487] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:51:53.941 Validate MD5 checksum, iteration 1 00:51:53.941 16:17:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:51:53.941 16:17:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:51:53.941 16:17:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:51:53.941 16:17:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:51:53.941 16:17:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:51:53.941 16:17:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:51:53.941 16:17:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:51:53.941 16:17:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:51:53.941 16:17:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:51:53.941 16:17:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:51:53.941 16:17:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:51:53.941 16:17:15 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:51:53.941 16:17:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:51:53.941 16:17:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:51:53.941 16:17:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:51:53.941 [2024-11-05 16:17:15.295161] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 00:51:53.941 [2024-11-05 16:17:15.295447] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81323 ] 00:51:54.202 [2024-11-05 16:17:15.460502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:51:54.464 [2024-11-05 16:17:15.588189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:51:55.853  [2024-11-05T16:17:18.159Z] Copying: 539/1024 [MB] (539 MBps) [2024-11-05T16:17:21.464Z] Copying: 1024/1024 [MB] (average 538 MBps) 00:52:00.102 00:52:00.102 16:17:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:52:00.102 16:17:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:52:02.019 16:17:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:52:02.019 Validate MD5 checksum, iteration 2 00:52:02.019 16:17:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=07f5a084729493410d3927ea70f8b94e 00:52:02.019 16:17:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 07f5a084729493410d3927ea70f8b94e != \0\7\f\5\a\0\8\4\7\2\9\4\9\3\4\1\0\d\3\9\2\7\e\a\7\0\f\8\b\9\4\e ]] 00:52:02.019 16:17:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:52:02.019 16:17:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:52:02.019 16:17:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:52:02.019 16:17:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:52:02.019 16:17:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:52:02.019 16:17:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:52:02.019 16:17:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:52:02.019 16:17:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:52:02.019 16:17:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:52:02.019 [2024-11-05 16:17:23.363512] Starting SPDK v25.01-pre git sha1 
eca0d2cd8 / DPDK 24.03.0 initialization... 00:52:02.019 [2024-11-05 16:17:23.363661] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81411 ] 00:52:02.280 [2024-11-05 16:17:23.526853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:02.542 [2024-11-05 16:17:23.660130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:52:03.927  [2024-11-05T16:17:26.234Z] Copying: 564/1024 [MB] (564 MBps) [2024-11-05T16:17:27.621Z] Copying: 1024/1024 [MB] (average 539 MBps) 00:52:06.259 00:52:06.259 16:17:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:52:06.259 16:17:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:52:08.204 16:17:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:52:08.204 16:17:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=b7884b304b82c18e283a88db93c3f9b8 00:52:08.204 16:17:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ b7884b304b82c18e283a88db93c3f9b8 != \b\7\8\8\4\b\3\0\4\b\8\2\c\1\8\e\2\8\3\a\8\8\d\b\9\3\c\3\f\9\b\8 ]] 00:52:08.204 16:17:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:52:08.204 16:17:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:52:08.205 16:17:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:52:08.205 16:17:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:52:08.205 16:17:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:52:08.205 16:17:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:52:08.465 16:17:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:52:08.465 16:17:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:52:08.465 16:17:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:52:08.465 16:17:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:52:08.465 16:17:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 81294 ]] 00:52:08.465 16:17:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 81294 00:52:08.465 16:17:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # '[' -z 81294 ']' 00:52:08.465 16:17:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # kill -0 81294 00:52:08.465 16:17:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # uname 00:52:08.465 16:17:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:52:08.465 16:17:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81294 00:52:08.465 killing process with pid 81294 00:52:08.465 16:17:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:52:08.465 16:17:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:52:08.465 16:17:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81294' 00:52:08.465 16:17:29 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@971 -- # kill 81294 00:52:08.465 16:17:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@976 -- # wait 81294 00:52:09.411 [2024-11-05 16:17:30.547388] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:52:09.411 [2024-11-05 16:17:30.563216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:09.411 [2024-11-05 16:17:30.563279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:52:09.411 [2024-11-05 16:17:30.563295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:52:09.411 [2024-11-05 16:17:30.563304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:09.411 [2024-11-05 16:17:30.563328] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:52:09.411 [2024-11-05 16:17:30.566409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:09.411 [2024-11-05 16:17:30.566595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:52:09.411 [2024-11-05 16:17:30.566615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.064 ms 00:52:09.411 [2024-11-05 16:17:30.566630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:09.411 [2024-11-05 16:17:30.566883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:09.411 [2024-11-05 16:17:30.566894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:52:09.411 [2024-11-05 16:17:30.566904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.224 ms 00:52:09.411 [2024-11-05 16:17:30.566912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:09.411 [2024-11-05 16:17:30.568790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:09.411 [2024-11-05 16:17:30.568826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:52:09.411 [2024-11-05 16:17:30.568837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.860 ms 00:52:09.411 [2024-11-05 16:17:30.568845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:09.411 [2024-11-05 16:17:30.570028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:09.411 [2024-11-05 16:17:30.570053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:52:09.411 [2024-11-05 16:17:30.570064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.141 ms 00:52:09.411 [2024-11-05 16:17:30.570073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:09.411 [2024-11-05 16:17:30.581484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:09.411 [2024-11-05 16:17:30.581657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:52:09.411 [2024-11-05 16:17:30.581679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.351 ms 00:52:09.411 [2024-11-05 16:17:30.581695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:09.411 [2024-11-05 16:17:30.587678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:09.411 [2024-11-05 16:17:30.587726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:52:09.411 [2024-11-05 16:17:30.587747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.809 ms 00:52:09.411 [2024-11-05 16:17:30.587756] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:52:09.411 [2024-11-05 16:17:30.587843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:09.411 [2024-11-05 16:17:30.587854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:52:09.411 [2024-11-05 16:17:30.587863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:52:09.411 [2024-11-05 16:17:30.587872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:09.411 [2024-11-05 16:17:30.598800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:09.411 [2024-11-05 16:17:30.598844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:52:09.411 [2024-11-05 16:17:30.598856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.902 ms 00:52:09.411 [2024-11-05 16:17:30.598865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:09.411 [2024-11-05 16:17:30.609694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:09.411 [2024-11-05 16:17:30.609740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:52:09.411 [2024-11-05 16:17:30.609752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.786 ms 00:52:09.411 [2024-11-05 16:17:30.609760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:09.411 [2024-11-05 16:17:30.619989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:09.411 [2024-11-05 16:17:30.620026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:52:09.411 [2024-11-05 16:17:30.620037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.186 ms 00:52:09.411 [2024-11-05 16:17:30.620045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:09.411 [2024-11-05 16:17:30.630209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:09.411 [2024-11-05 16:17:30.630259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:52:09.411 [2024-11-05 16:17:30.630271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.087 ms 00:52:09.411 [2024-11-05 16:17:30.630279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:09.411 [2024-11-05 16:17:30.630322] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:52:09.411 [2024-11-05 16:17:30.630339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:52:09.411 [2024-11-05 16:17:30.630351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:52:09.411 [2024-11-05 16:17:30.630361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:52:09.411 [2024-11-05 16:17:30.630370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:52:09.411 [2024-11-05 16:17:30.630378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:52:09.411 [2024-11-05 16:17:30.630386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:52:09.411 [2024-11-05 16:17:30.630394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:52:09.411 [2024-11-05 16:17:30.630402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:52:09.411 
[2024-11-05 16:17:30.630409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:52:09.411 [2024-11-05 16:17:30.630417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:52:09.411 [2024-11-05 16:17:30.630425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:52:09.411 [2024-11-05 16:17:30.630433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:52:09.411 [2024-11-05 16:17:30.630441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:52:09.411 [2024-11-05 16:17:30.630449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:52:09.411 [2024-11-05 16:17:30.630456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:52:09.411 [2024-11-05 16:17:30.630464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:52:09.411 [2024-11-05 16:17:30.630472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:52:09.411 [2024-11-05 16:17:30.630479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:52:09.411 [2024-11-05 16:17:30.630490] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:52:09.411 [2024-11-05 16:17:30.630498] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 8c11d26c-8cb0-4981-b3d3-d10ebe806ca0 00:52:09.411 [2024-11-05 16:17:30.630506] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:52:09.411 [2024-11-05 16:17:30.630514] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:52:09.412 [2024-11-05 16:17:30.630523] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:52:09.412 [2024-11-05 16:17:30.630532] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:52:09.412 [2024-11-05 16:17:30.630539] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:52:09.412 [2024-11-05 16:17:30.630547] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:52:09.412 [2024-11-05 16:17:30.630555] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:52:09.412 [2024-11-05 16:17:30.630561] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:52:09.412 [2024-11-05 16:17:30.630568] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:52:09.412 [2024-11-05 16:17:30.630579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:09.412 [2024-11-05 16:17:30.630595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:52:09.412 [2024-11-05 16:17:30.630605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.259 ms 00:52:09.412 [2024-11-05 16:17:30.630612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:09.412 [2024-11-05 16:17:30.644614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:09.412 [2024-11-05 16:17:30.644651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:52:09.412 [2024-11-05 16:17:30.644663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.982 ms 00:52:09.412 [2024-11-05 16:17:30.644671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
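
The per-band validity and WAF figures above are printed by ftl_debug.c as the 'Dump statistics' step of the shutdown sequence. On a live target the same information can be queried over JSON-RPC instead of waiting for shutdown; a hedged sketch, reusing this job's socket path (bdev_get_bdevs is a core RPC, while bdev_ftl_get_stats is assumed to be available in this SPDK revision):

# Hedged sketch: inspect the FTL bdev on a running target over JSON-RPC.
# Socket and repo paths match this job; the bdev name "ftl" matches the log.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk.tgt.sock
# Core RPC, valid for any bdev: size, UUID, claim state.
"$rpc" -s "$sock" bdev_get_bdevs -b ftl
# FTL-specific write counters and limits; assumed present in this build.
"$rpc" -s "$sock" bdev_ftl_get_stats -b ftl || echo 'bdev_ftl_get_stats unavailable'
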
00:52:09.412 [2024-11-05 16:17:30.645089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:52:09.412 [2024-11-05 16:17:30.645099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:52:09.412 [2024-11-05 16:17:30.645109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.388 ms 00:52:09.412 [2024-11-05 16:17:30.645117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:09.412 [2024-11-05 16:17:30.691473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:52:09.412 [2024-11-05 16:17:30.691526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:52:09.412 [2024-11-05 16:17:30.691540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:52:09.412 [2024-11-05 16:17:30.691550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:09.412 [2024-11-05 16:17:30.691614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:52:09.412 [2024-11-05 16:17:30.691625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:52:09.412 [2024-11-05 16:17:30.691634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:52:09.412 [2024-11-05 16:17:30.691643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:09.412 [2024-11-05 16:17:30.691782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:52:09.412 [2024-11-05 16:17:30.691795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:52:09.412 [2024-11-05 16:17:30.691806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:52:09.412 [2024-11-05 16:17:30.691815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:09.412 [2024-11-05 16:17:30.691836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:52:09.412 [2024-11-05 16:17:30.691849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:52:09.412 [2024-11-05 16:17:30.691858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:52:09.412 [2024-11-05 16:17:30.691865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:09.674 [2024-11-05 16:17:30.777484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:52:09.674 [2024-11-05 16:17:30.777541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:52:09.674 [2024-11-05 16:17:30.777554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:52:09.674 [2024-11-05 16:17:30.777563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:09.674 [2024-11-05 16:17:30.847317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:52:09.674 [2024-11-05 16:17:30.847383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:52:09.674 [2024-11-05 16:17:30.847396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:52:09.674 [2024-11-05 16:17:30.847405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:09.674 [2024-11-05 16:17:30.847499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:52:09.674 [2024-11-05 16:17:30.847510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:52:09.674 [2024-11-05 16:17:30.847519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:52:09.674 [2024-11-05 16:17:30.847527] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:09.674 [2024-11-05 16:17:30.847590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:52:09.674 [2024-11-05 16:17:30.847601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:52:09.674 [2024-11-05 16:17:30.847614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:52:09.674 [2024-11-05 16:17:30.847631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:09.674 [2024-11-05 16:17:30.847754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:52:09.674 [2024-11-05 16:17:30.847766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:52:09.674 [2024-11-05 16:17:30.847775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:52:09.674 [2024-11-05 16:17:30.847783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:09.674 [2024-11-05 16:17:30.847819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:52:09.674 [2024-11-05 16:17:30.847830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:52:09.674 [2024-11-05 16:17:30.847838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:52:09.674 [2024-11-05 16:17:30.847849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:09.674 [2024-11-05 16:17:30.847891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:52:09.674 [2024-11-05 16:17:30.847901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:52:09.674 [2024-11-05 16:17:30.847910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:52:09.674 [2024-11-05 16:17:30.847917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:09.674 [2024-11-05 16:17:30.847965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:52:09.674 [2024-11-05 16:17:30.847975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:52:09.674 [2024-11-05 16:17:30.847987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:52:09.674 [2024-11-05 16:17:30.847995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:52:09.674 [2024-11-05 16:17:30.848128] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 284.879 ms, result 0 00:52:10.620 16:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:52:10.620 16:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:52:10.620 16:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:52:10.620 16:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:52:10.620 16:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:52:10.620 16:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:52:10.620 16:17:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:52:10.620 16:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:52:10.620 Remove shared memory files 00:52:10.620 16:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:52:10.620 16:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:52:10.620 16:17:31 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid81056 00:52:10.620 16:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:52:10.620 16:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:52:10.620 00:52:10.620 real 1m27.589s 00:52:10.620 user 2m1.719s 00:52:10.620 sys 0m19.714s 00:52:10.620 16:17:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable 00:52:10.620 ************************************ 00:52:10.620 END TEST ftl_upgrade_shutdown 00:52:10.620 ************************************ 00:52:10.620 16:17:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:52:10.620 16:17:31 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:52:10.620 16:17:31 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:52:10.620 16:17:31 ftl -- ftl/ftl.sh@14 -- # killprocess 72181 00:52:10.620 16:17:31 ftl -- common/autotest_common.sh@952 -- # '[' -z 72181 ']' 00:52:10.620 Process with pid 72181 is not found 00:52:10.620 16:17:31 ftl -- common/autotest_common.sh@956 -- # kill -0 72181 00:52:10.620 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (72181) - No such process 00:52:10.620 16:17:31 ftl -- common/autotest_common.sh@979 -- # echo 'Process with pid 72181 is not found' 00:52:10.620 16:17:31 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:52:10.620 16:17:31 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=81534 00:52:10.620 16:17:31 ftl -- ftl/ftl.sh@20 -- # waitforlisten 81534 00:52:10.620 16:17:31 ftl -- common/autotest_common.sh@833 -- # '[' -z 81534 ']' 00:52:10.620 16:17:31 ftl -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:52:10.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:52:10.620 16:17:31 ftl -- common/autotest_common.sh@838 -- # local max_retries=100 00:52:10.620 16:17:31 ftl -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:52:10.620 16:17:31 ftl -- common/autotest_common.sh@842 -- # xtrace_disable 00:52:10.620 16:17:31 ftl -- common/autotest_common.sh@10 -- # set +x 00:52:10.620 16:17:31 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:52:10.620 [2024-11-05 16:17:31.929149] Starting SPDK v25.01-pre git sha1 eca0d2cd8 / DPDK 24.03.0 initialization... 
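
The ftl_upgrade_shutdown test that finished above is, at its core, the checksum loop whose trace appears earlier: read back 1024 MiB windows from the recovered FTL device over NVMe/TCP, hash each window, and compare against the sums recorded before the shutdown/upgrade cycle. A minimal sketch of that loop, where tcp_dd stands for the spdk_dd wrapper seen in the trace, and testdir/expected_sums are hypothetical stand-ins for the script's own state:

# Minimal sketch of the validate-checksum loop traced in this test.
iterations=2
skip=0
for ((i = 0; i < iterations; i++)); do
  echo "Validate MD5 checksum, iteration $((i + 1))"
  # Pull the next 1024 x 1 MiB blocks from the FTL bdev over NVMe/TCP.
  tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
  skip=$((skip + 1024))
  sum=$(md5sum "$testdir/file" | cut -f1 -d' ')
  # A mismatch against the pre-shutdown sum fails the test.
  [[ $sum == "${expected_sums[i]}" ]] || exit 1
done
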
00:52:10.620 [2024-11-05 16:17:31.929302] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81534 ] 00:52:10.882 [2024-11-05 16:17:32.098101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:10.882 [2024-11-05 16:17:32.239366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:52:11.825 16:17:32 ftl -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:52:11.825 16:17:32 ftl -- common/autotest_common.sh@866 -- # return 0 00:52:11.825 16:17:32 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:52:12.085 nvme0n1 00:52:12.085 16:17:33 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:52:12.085 16:17:33 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:52:12.085 16:17:33 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:52:12.345 16:17:33 ftl -- ftl/common.sh@28 -- # stores=63d160a7-d1c9-4c8d-a279-b0b667483969 00:52:12.345 16:17:33 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:52:12.345 16:17:33 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 63d160a7-d1c9-4c8d-a279-b0b667483969 00:52:12.345 16:17:33 ftl -- ftl/ftl.sh@23 -- # killprocess 81534 00:52:12.346 16:17:33 ftl -- common/autotest_common.sh@952 -- # '[' -z 81534 ']' 00:52:12.346 16:17:33 ftl -- common/autotest_common.sh@956 -- # kill -0 81534 00:52:12.346 16:17:33 ftl -- common/autotest_common.sh@957 -- # uname 00:52:12.346 16:17:33 ftl -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:52:12.346 16:17:33 ftl -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81534 00:52:12.607 16:17:33 ftl -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:52:12.607 16:17:33 ftl -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:52:12.607 killing process with pid 81534 00:52:12.607 16:17:33 ftl -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81534' 00:52:12.607 16:17:33 ftl -- common/autotest_common.sh@971 -- # kill 81534 00:52:12.607 16:17:33 ftl -- common/autotest_common.sh@976 -- # wait 81534 00:52:14.525 16:17:35 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:52:14.525 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:52:14.526 Waiting for block devices as requested 00:52:14.526 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:52:14.526 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:52:14.526 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:52:14.788 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:52:20.081 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:52:20.081 16:17:41 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:52:20.081 Remove shared memory files 00:52:20.081 16:17:41 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:52:20.081 16:17:41 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:52:20.081 16:17:41 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:52:20.081 16:17:41 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:52:20.081 16:17:41 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:52:20.081 16:17:41 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:52:20.081 00:52:20.081 real 
13m57.773s 00:52:20.081 user 16m6.407s 00:52:20.081 sys 1m26.427s 00:52:20.081 ************************************ 00:52:20.081 END TEST ftl 00:52:20.081 ************************************ 00:52:20.081 16:17:41 ftl -- common/autotest_common.sh@1128 -- # xtrace_disable 00:52:20.081 16:17:41 ftl -- common/autotest_common.sh@10 -- # set +x 00:52:20.081 16:17:41 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:52:20.081 16:17:41 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:52:20.081 16:17:41 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:52:20.081 16:17:41 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:52:20.081 16:17:41 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:52:20.081 16:17:41 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:52:20.081 16:17:41 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:52:20.081 16:17:41 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:52:20.081 16:17:41 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:52:20.081 16:17:41 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:52:20.081 16:17:41 -- common/autotest_common.sh@724 -- # xtrace_disable 00:52:20.081 16:17:41 -- common/autotest_common.sh@10 -- # set +x 00:52:20.081 16:17:41 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:52:20.081 16:17:41 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:52:20.081 16:17:41 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:52:20.081 16:17:41 -- common/autotest_common.sh@10 -- # set +x 00:52:21.477 INFO: APP EXITING 00:52:21.477 INFO: killing all VMs 00:52:21.477 INFO: killing vhost app 00:52:21.477 INFO: EXIT DONE 00:52:21.739 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:52:21.999 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:52:22.260 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:52:22.260 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:52:22.260 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:52:22.521 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:52:23.093 Cleaning 00:52:23.093 Removing: /var/run/dpdk/spdk0/config 00:52:23.093 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:52:23.093 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:52:23.093 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:52:23.093 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:52:23.093 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:52:23.093 Removing: /var/run/dpdk/spdk0/hugepage_info 00:52:23.093 Removing: /var/run/dpdk/spdk0 00:52:23.093 Removing: /var/run/dpdk/spdk_pid56917 00:52:23.093 Removing: /var/run/dpdk/spdk_pid57119 00:52:23.093 Removing: /var/run/dpdk/spdk_pid57332 00:52:23.093 Removing: /var/run/dpdk/spdk_pid57430 00:52:23.093 Removing: /var/run/dpdk/spdk_pid57464 00:52:23.093 Removing: /var/run/dpdk/spdk_pid57587 00:52:23.093 Removing: /var/run/dpdk/spdk_pid57599 00:52:23.093 Removing: /var/run/dpdk/spdk_pid57793 00:52:23.093 Removing: /var/run/dpdk/spdk_pid57879 00:52:23.093 Removing: /var/run/dpdk/spdk_pid57974 00:52:23.093 Removing: /var/run/dpdk/spdk_pid58081 00:52:23.093 Removing: /var/run/dpdk/spdk_pid58178 00:52:23.093 Removing: /var/run/dpdk/spdk_pid58212 00:52:23.093 Removing: /var/run/dpdk/spdk_pid58254 00:52:23.093 Removing: /var/run/dpdk/spdk_pid58319 00:52:23.093 Removing: /var/run/dpdk/spdk_pid58414 00:52:23.093 Removing: /var/run/dpdk/spdk_pid58839 00:52:23.093 Removing: /var/run/dpdk/spdk_pid58903 00:52:23.093 
Removing: /var/run/dpdk/spdk_pid58955 00:52:23.093 Removing: /var/run/dpdk/spdk_pid58971 00:52:23.093 Removing: /var/run/dpdk/spdk_pid59072 00:52:23.093 Removing: /var/run/dpdk/spdk_pid59084 00:52:23.093 Removing: /var/run/dpdk/spdk_pid59187 00:52:23.093 Removing: /var/run/dpdk/spdk_pid59197 00:52:23.093 Removing: /var/run/dpdk/spdk_pid59256 00:52:23.093 Removing: /var/run/dpdk/spdk_pid59274 00:52:23.093 Removing: /var/run/dpdk/spdk_pid59327 00:52:23.093 Removing: /var/run/dpdk/spdk_pid59339 00:52:23.093 Removing: /var/run/dpdk/spdk_pid59499 00:52:23.093 Removing: /var/run/dpdk/spdk_pid59536 00:52:23.093 Removing: /var/run/dpdk/spdk_pid59619 00:52:23.093 Removing: /var/run/dpdk/spdk_pid59797 00:52:23.093 Removing: /var/run/dpdk/spdk_pid59881 00:52:23.093 Removing: /var/run/dpdk/spdk_pid59917 00:52:23.093 Removing: /var/run/dpdk/spdk_pid60345 00:52:23.093 Removing: /var/run/dpdk/spdk_pid60443 00:52:23.093 Removing: /var/run/dpdk/spdk_pid60552 00:52:23.093 Removing: /var/run/dpdk/spdk_pid60607 00:52:23.093 Removing: /var/run/dpdk/spdk_pid60627 00:52:23.093 Removing: /var/run/dpdk/spdk_pid60711 00:52:23.093 Removing: /var/run/dpdk/spdk_pid61336 00:52:23.093 Removing: /var/run/dpdk/spdk_pid61372 00:52:23.093 Removing: /var/run/dpdk/spdk_pid61836 00:52:23.093 Removing: /var/run/dpdk/spdk_pid61934 00:52:23.093 Removing: /var/run/dpdk/spdk_pid62044 00:52:23.093 Removing: /var/run/dpdk/spdk_pid62097 00:52:23.093 Removing: /var/run/dpdk/spdk_pid62117 00:52:23.093 Removing: /var/run/dpdk/spdk_pid62148 00:52:23.093 Removing: /var/run/dpdk/spdk_pid63986 00:52:23.093 Removing: /var/run/dpdk/spdk_pid64124 00:52:23.093 Removing: /var/run/dpdk/spdk_pid64128 00:52:23.093 Removing: /var/run/dpdk/spdk_pid64140 00:52:23.093 Removing: /var/run/dpdk/spdk_pid64186 00:52:23.093 Removing: /var/run/dpdk/spdk_pid64190 00:52:23.093 Removing: /var/run/dpdk/spdk_pid64202 00:52:23.093 Removing: /var/run/dpdk/spdk_pid64247 00:52:23.093 Removing: /var/run/dpdk/spdk_pid64251 00:52:23.093 Removing: /var/run/dpdk/spdk_pid64263 00:52:23.093 Removing: /var/run/dpdk/spdk_pid64309 00:52:23.093 Removing: /var/run/dpdk/spdk_pid64313 00:52:23.093 Removing: /var/run/dpdk/spdk_pid64325 00:52:23.093 Removing: /var/run/dpdk/spdk_pid65680 00:52:23.093 Removing: /var/run/dpdk/spdk_pid65783 00:52:23.093 Removing: /var/run/dpdk/spdk_pid67178 00:52:23.093 Removing: /var/run/dpdk/spdk_pid68581 00:52:23.093 Removing: /var/run/dpdk/spdk_pid68657 00:52:23.093 Removing: /var/run/dpdk/spdk_pid68734 00:52:23.093 Removing: /var/run/dpdk/spdk_pid68811 00:52:23.093 Removing: /var/run/dpdk/spdk_pid68910 00:52:23.094 Removing: /var/run/dpdk/spdk_pid68979 00:52:23.094 Removing: /var/run/dpdk/spdk_pid69121 00:52:23.094 Removing: /var/run/dpdk/spdk_pid69474 00:52:23.094 Removing: /var/run/dpdk/spdk_pid69516 00:52:23.094 Removing: /var/run/dpdk/spdk_pid69957 00:52:23.094 Removing: /var/run/dpdk/spdk_pid70143 00:52:23.094 Removing: /var/run/dpdk/spdk_pid70237 00:52:23.094 Removing: /var/run/dpdk/spdk_pid70347 00:52:23.094 Removing: /var/run/dpdk/spdk_pid70390 00:52:23.094 Removing: /var/run/dpdk/spdk_pid70420 00:52:23.094 Removing: /var/run/dpdk/spdk_pid70711 00:52:23.094 Removing: /var/run/dpdk/spdk_pid70762 00:52:23.094 Removing: /var/run/dpdk/spdk_pid70841 00:52:23.094 Removing: /var/run/dpdk/spdk_pid71230 00:52:23.094 Removing: /var/run/dpdk/spdk_pid71376 00:52:23.094 Removing: /var/run/dpdk/spdk_pid72181 00:52:23.094 Removing: /var/run/dpdk/spdk_pid72313 00:52:23.094 Removing: /var/run/dpdk/spdk_pid72481 00:52:23.094 Removing: 
/var/run/dpdk/spdk_pid72573 00:52:23.094 Removing: /var/run/dpdk/spdk_pid72870 00:52:23.094 Removing: /var/run/dpdk/spdk_pid73116 00:52:23.094 Removing: /var/run/dpdk/spdk_pid73455 00:52:23.094 Removing: /var/run/dpdk/spdk_pid73637 00:52:23.094 Removing: /var/run/dpdk/spdk_pid73828 00:52:23.094 Removing: /var/run/dpdk/spdk_pid73881 00:52:23.094 Removing: /var/run/dpdk/spdk_pid74052 00:52:23.094 Removing: /var/run/dpdk/spdk_pid74084 00:52:23.094 Removing: /var/run/dpdk/spdk_pid74137 00:52:23.094 Removing: /var/run/dpdk/spdk_pid74402 00:52:23.094 Removing: /var/run/dpdk/spdk_pid74628 00:52:23.094 Removing: /var/run/dpdk/spdk_pid75239 00:52:23.094 Removing: /var/run/dpdk/spdk_pid76044 00:52:23.094 Removing: /var/run/dpdk/spdk_pid76759 00:52:23.094 Removing: /var/run/dpdk/spdk_pid77579 00:52:23.094 Removing: /var/run/dpdk/spdk_pid77732 00:52:23.094 Removing: /var/run/dpdk/spdk_pid77820 00:52:23.094 Removing: /var/run/dpdk/spdk_pid78377 00:52:23.094 Removing: /var/run/dpdk/spdk_pid78431 00:52:23.094 Removing: /var/run/dpdk/spdk_pid79112 00:52:23.356 Removing: /var/run/dpdk/spdk_pid79623 00:52:23.356 Removing: /var/run/dpdk/spdk_pid80503 00:52:23.356 Removing: /var/run/dpdk/spdk_pid80626 00:52:23.356 Removing: /var/run/dpdk/spdk_pid80670 00:52:23.356 Removing: /var/run/dpdk/spdk_pid80734 00:52:23.356 Removing: /var/run/dpdk/spdk_pid80792 00:52:23.356 Removing: /var/run/dpdk/spdk_pid80861 00:52:23.356 Removing: /var/run/dpdk/spdk_pid81056 00:52:23.356 Removing: /var/run/dpdk/spdk_pid81137 00:52:23.356 Removing: /var/run/dpdk/spdk_pid81204 00:52:23.356 Removing: /var/run/dpdk/spdk_pid81294 00:52:23.356 Removing: /var/run/dpdk/spdk_pid81323 00:52:23.356 Removing: /var/run/dpdk/spdk_pid81411 00:52:23.356 Removing: /var/run/dpdk/spdk_pid81534 00:52:23.356 Clean 00:52:23.356 16:17:44 -- common/autotest_common.sh@1451 -- # return 0 00:52:23.356 16:17:44 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:52:23.356 16:17:44 -- common/autotest_common.sh@730 -- # xtrace_disable 00:52:23.356 16:17:44 -- common/autotest_common.sh@10 -- # set +x 00:52:23.356 16:17:44 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:52:23.356 16:17:44 -- common/autotest_common.sh@730 -- # xtrace_disable 00:52:23.356 16:17:44 -- common/autotest_common.sh@10 -- # set +x 00:52:23.356 16:17:44 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:52:23.356 16:17:44 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:52:23.356 16:17:44 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:52:23.356 16:17:44 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:52:23.356 16:17:44 -- spdk/autotest.sh@394 -- # hostname 00:52:23.356 16:17:44 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:52:23.619 geninfo: WARNING: invalid characters removed from testname! 
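
Each lcov invocation below repeats the same long run of --rc coverage flags; with those elided, the post-processing reduces to one merge followed by successive source-path filters (paths exactly as traced in this job):

# Condensed form of the lcov post-processing below; every real call also
# carries the full --rc lcov_branch_coverage=1 ... flag block, elided here.
out=/home/vagrant/spdk_repo/spdk/../output
lcov -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
lcov -q -r "$out/cov_total.info" '*/dpdk/*' -o "$out/cov_total.info"
lcov -q -r "$out/cov_total.info" --ignore-errors unused,unused '/usr/*' -o "$out/cov_total.info"
lcov -q -r "$out/cov_total.info" '*/examples/vmd/*' -o "$out/cov_total.info"
lcov -q -r "$out/cov_total.info" '*/app/spdk_lspci/*' -o "$out/cov_total.info"
lcov -q -r "$out/cov_total.info" '*/app/spdk_top/*' -o "$out/cov_total.info"
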
00:52:50.175 16:18:08 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:52:50.175 16:18:11 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:52:52.723 16:18:14 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:52:55.261 16:18:16 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:52:57.830 16:18:19 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:53:00.371 16:18:21 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:53:03.669 16:18:24 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:53:03.669 16:18:24 -- spdk/autorun.sh@1 -- $ timing_finish 00:53:03.669 16:18:24 -- common/autotest_common.sh@736 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:53:03.669 16:18:24 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:53:03.669 16:18:24 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:53:03.669 16:18:24 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:53:03.669 + [[ -n 5039 ]] 00:53:03.669 + sudo kill 5039 00:53:03.679 [Pipeline] } 00:53:03.694 [Pipeline] // timeout 00:53:03.699 [Pipeline] } 00:53:03.712 [Pipeline] // stage 00:53:03.716 [Pipeline] } 00:53:03.728 [Pipeline] // catchError 00:53:03.737 [Pipeline] stage 00:53:03.739 [Pipeline] { (Stop VM) 00:53:03.753 [Pipeline] sh 00:53:04.037 + vagrant halt 00:53:07.324 ==> default: Halting domain... 
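
The timing_finish step, traced a few lines up, renders timing.txt as a flame graph only when the FlameGraph tool is installed. Its guard looks roughly like this; redirecting to an .svg is an assumption, since flamegraph.pl writes its SVG to stdout:

# Rough shape of the timing_finish guard traced above.
timing=/home/vagrant/spdk_repo/spdk/../output/timing.txt
flamegraph=/usr/local/FlameGraph/flamegraph.pl
if [[ -e $timing && -x $flamegraph ]]; then
  "$flamegraph" --title 'Build Timing' --nametype Step: --countname seconds \
    "$timing" > "${timing%.txt}.svg"
fi
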
00:53:10.655 [Pipeline] sh 00:53:10.938 + vagrant destroy -f 00:53:13.510 ==> default: Removing domain... 00:53:14.095 [Pipeline] sh 00:53:14.411 + mv output /var/jenkins/workspace/nvme-vg-autotest_3/output 00:53:14.422 [Pipeline] } 00:53:14.437 [Pipeline] // stage 00:53:14.443 [Pipeline] } 00:53:14.457 [Pipeline] // dir 00:53:14.462 [Pipeline] } 00:53:14.477 [Pipeline] // wrap 00:53:14.484 [Pipeline] } 00:53:14.497 [Pipeline] // catchError 00:53:14.507 [Pipeline] stage 00:53:14.509 [Pipeline] { (Epilogue) 00:53:14.522 [Pipeline] sh 00:53:14.808 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:53:21.412 [Pipeline] catchError 00:53:21.414 [Pipeline] { 00:53:21.427 [Pipeline] sh 00:53:21.714 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:53:21.714 Artifacts sizes are good 00:53:21.728 [Pipeline] } 00:53:21.744 [Pipeline] // catchError 00:53:21.756 [Pipeline] archiveArtifacts 00:53:21.764 Archiving artifacts 00:53:21.867 [Pipeline] cleanWs 00:53:21.880 [WS-CLEANUP] Deleting project workspace... 00:53:21.880 [WS-CLEANUP] Deferred wipeout is used... 00:53:21.889 [WS-CLEANUP] done 00:53:21.891 [Pipeline] } 00:53:21.906 [Pipeline] // stage 00:53:21.911 [Pipeline] } 00:53:21.927 [Pipeline] // node 00:53:21.932 [Pipeline] End of Pipeline 00:53:21.974 Finished: SUCCESS